Compare commits


57 Commits

SHA1 Message Date
278386c3d5 Bump version to v3.0.3
All checks were successful
continuous-integration/drone/push Build is passing
2024-11-18 15:15:27 -08:00
1bd66e42de Fix broken py37 build
All checks were successful
continuous-integration/drone/push Build is passing
Forgot the walrus operator isn't supported in py37
2024-11-18 15:14:57 -08:00
04fa347d28 Bump version to v3.0.2
Some checks failed
continuous-integration/drone/push Build is failing
continuous-integration/drone/tag Build is failing
2024-11-18 11:40:37 -08:00
b76826a873 Fix extracting named members from tar file 2024-11-18 11:39:49 -08:00
583cd2b0bb Fix broken extract all files flag 2024-11-18 11:39:13 -08:00
f9c462b94a Improve debug logging
This also includes one fix that I discovered while improving the
logging. Even if a git url was provided, release_gitter was looking for a
local package declaration (Cargo.toml) to identify the version.

With this change, the url parsing and the local repo logic are split,
allowing for more detailed logging as well as avoiding this potential
bug.
2024-11-18 11:36:40 -08:00
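
A rough sketch of that split, mirroring the helpers added to release_gitter.py later in this diff (only the first function is reproduced; the second is summarized):

from subprocess import check_output

def read_git_remote() -> str:
    # The local-repo lookup now lives in its own helper instead of
    # being buried inside url parsing
    return check_output(["git", "remote", "get-url", "origin"]).decode("UTF-8").strip()

# parse_git_url(git_url) then only parses an explicit url string, so a
# provided --git-url no longer triggers a probe for local project
# metadata such as Cargo.toml.
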
b59e908d84 Bump version to v3.0.1
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2024-11-11 12:50:55 -08:00
3059b36908 Remove conflicting cli arguments 2024-11-11 12:50:32 -08:00
29df64c07b Build: Try to cache pip between tests
Some checks reported errors
continuous-integration/drone/push Build was killed
2024-11-11 12:47:04 -08:00
c1dd243035 Bump version to v3.0.0 because of possible breaking change
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2024-11-11 12:30:32 -08:00
0bb2277e26 BREAKING: Add the ability to download and extract to a temp dir
All checks were successful
continuous-integration/drone/push Build is passing
This will also execute any --exec scripts from the dest directory
2024-11-11 12:29:17 -08:00
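
A hedged usage sketch (flags are taken from, or implied by, the argument parser changes later in this diff; the repo URL and binary name are illustrative): release-gitter --use-temp-dir --extract-all --exec './mytool --version' --git-url https://github.com/OWNER/REPO 'mytool-{version}-{system}.tar.gz'. After this change the --exec command runs from the destination (here, the temp dir) rather than from the caller's working directory.
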
6fe0869e8b Bump version to v2.5.2
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2024-11-07 16:01:05 -08:00
16fb7ca849 Forgot one spot 2024-11-07 16:00:48 -08:00
bcf65ce10f Bump version to v2.5.1
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2024-11-07 15:39:47 -08:00
4eead212cf Allow format values in exec calls 2024-11-07 15:39:12 -08:00
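
Judging by the check_call(args.exec.format(asset["name"], **format_fields), ...) call later in this diff, the exec string is a Python format template: {0} expands to the asset name, and named fields such as {asset_name}, {version}, {system}, and {arch} come from the matched values. An illustrative (hypothetical) use: --exec 'chmod +x {asset_name}'.
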
b9600cb631 Bump version to v2.5.0
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2024-11-07 11:41:35 -08:00
7ecbf2c5cd Add pseudo_builder to coverage
All checks were successful
continuous-integration/drone/push Build is passing
Even though nothing is reported, because the integration test runs in a
subshell
2024-11-06 20:22:41 -08:00
35b07836e8 Allow templating values into extract file names
All checks were successful
continuous-integration/drone/push Build is passing
2024-11-06 16:21:19 -08:00
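
The same format fields now apply to --extract-files entries, so a hypothetical archive layout like mytool-{version}/mytool can be selected without hard-coding the version (see the formatted_files logic in download_release and main later in this diff).
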
7380fa99ec Allow specifying a name for the package
Default to repo name
2024-11-06 16:17:58 -08:00
bb0b82ab72 Add itest for pseudobuilder
All checks were successful
continuous-integration/drone/push Build is passing
2024-11-06 16:13:58 -08:00
564a120bfe Bump version to v2.4.0
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2024-10-31 13:26:05 -07:00
e58f1fd7b1 Download and extract files in wheel scripts dir
All checks were successful
continuous-integration/drone/push Build is passing
This avoids conflicting file names with the root cwd
2024-10-31 13:19:47 -07:00
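
Concretely, per build_wheel later in this diff, files are extracted into the wheel's {name}-{version}.data/scripts directory, which pip installs onto the environment's bin path, instead of being downloaded into the build's working directory and moved afterward.
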
75c37b4aa7 Bump version to v2.3.0
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2024-05-14 14:15:35 -07:00
ef7160fe7c Include default system and arch synonyms
Some projects use different system and arch names in their assets,
sometimes due to convention and sometimes due to differing tools and
systems. For example, on macOS 13.6, Python will report the system as
`Darwin`, but some release assets will be named `macOS` or `macos`.
Similarly, `arm64` and `aarch64` are used interchangeably.

This patch adds a few lists of synonymous values such that
release-gitter can make an attempt at matching the intended binary.
These lists of synonyms can be expanded to be more complete as time goes
on.

These synonyms are only used if there is no user-provided mapping. If
any user-provided mapping exists, that map is the sole source of truth.
E.g. if you provide a map for `Windows=>windows`, no other values will
be mapped and we won't assume that `Darwin=>macos` anymore.
2024-05-14 14:15:35 -07:00
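
A minimal sketch of the lookup, mirroring the SYSTEM_SYNONYMS and get_synonyms code added to release_gitter.py later in this diff:

from __future__ import annotations

SYSTEM_SYNONYMS: list[list[str]] = [
    ["Darwin", "darwin", "MacOS", "macos", "macOS"],
    ["Windows", "windows", "win", "win32", "win64"],
    ["Linux", "linux"],
]

def get_synonyms(value: str, thesaurus: list[list[str]]) -> list[str]:
    # The value itself always matches; any group containing it
    # contributes the rest of its members (duplicates are harmless here)
    results = [value]
    for group in thesaurus:
        if value in group:
            results += group
    return results

print(get_synonyms("Darwin", SYSTEM_SYNONYMS))
# ['Darwin', 'Darwin', 'darwin', 'MacOS', 'macos', 'macOS']
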
a6c839a31e Remove typing.Optional from tests 2024-05-14 14:15:35 -07:00
ec401c7d6a Add python12 to test matrix
Still testing against 3.7, even though it's EOL
2024-05-14 14:15:35 -07:00
7a5bed0454 Switch from reorder-python-imports to isort 2024-05-14 14:15:35 -07:00
d639b868a1 Update pre-commit hooks 2024-05-14 14:01:40 -07:00
ddf509e9a4 Bump patch version to 2.2.1
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2023-10-31 20:42:28 -07:00
fbb38a9d7d Make git-url optional again for pseudo_builder
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-31 20:41:40 -07:00
f0ab45f0c6 Make sure hatch is installed when verifying tag
Some checks reported errors
continuous-integration/drone/push Build was killed
continuous-integration/drone/tag Build is passing
2023-10-27 15:41:30 -07:00
3eb5fb3d75 Bump version to v2.2.0
Some checks failed
continuous-integration/drone/tag Build is failing
continuous-integration/drone/push Build is passing
2023-10-27 15:32:43 -07:00
f1352658ae Update pseudo_builder to be able to include extra files 2023-10-27 15:32:43 -07:00
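
For context: the extra files come from the consumer's pyproject.toml, where dashed keys are mapped to underscores by read_metadata (see the pseudo_builder.py diff below), e.g. a hypothetical include-extra-files = [ "LICENSE" ] under [tool.release-gitter]. Note that build_wheel rejects any listed path that resolves outside the current directory.
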
b8b81825f6 Skip upload pipeline entirely when not tagging for now 2023-10-27 15:32:30 -07:00
daedacb35f Really require python 3.7
All checks were successful
continuous-integration/drone/push Build is passing
2023-10-27 15:27:04 -07:00
09a7d38bc7 Skip upload to test pypi because we can't overwrite 2023-10-27 15:26:46 -07:00
ff803dbc31 Use hatch dynamic version so that we can increment before test uploads
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-27 14:25:02 -07:00
5ba06140dc Add linting back in
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-27 14:04:46 -07:00
302258ce6c Fix test upload to use hatch publish
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-27 13:52:03 -07:00
5423c04df6 Clean venv before trying to run hatch again
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-27 13:46:32 -07:00
30801c5927 Switch from tox to hatch
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-27 13:41:46 -07:00
8b9ff334a5 Switch to pyproject
Some checks failed
continuous-integration/drone/push Build is failing
2023-10-26 17:32:28 -07:00
08773d61b7 Bump version to v2.1.1
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2023-06-26 16:28:58 -07:00
6726931916 Run tests on python 3.11
All checks were successful
continuous-integration/drone/push Build is passing
2023-06-26 16:26:39 -07:00
b0e327e2cd Remove walrus operator to fix for python3.7
All checks were successful
continuous-integration/drone/push Build is passing
Python 3.7 goes into end of support mode tomorrow, 2023-06-27, but will
likely be in wide use for some time after
2023-06-26 16:14:34 -07:00
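
An illustrative (hypothetical) example of the kind of rewrite involved, since the walrus operator requires py3.8+:

def compute():
    return 42

# The removed py3.8+ form:
#     if (value := compute()) is not None:
#         print(value)
# The py3.7-compatible replacement assigns first, then tests:
value = compute()
if value is not None:
    print(value)
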
ab1f25304b Correct some argument help strings
Some checks failed
continuous-integration/drone/push Build is failing
2023-06-12 11:08:45 -07:00
dfc12ed79e Raise exception if trying to extract a member that doesn't exist 2023-06-12 11:08:21 -07:00
de7fe72cec Bump version to v2.1.0
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
2023-06-05 11:45:49 -07:00
0f46808403 Add verbose flag to print version and asset downloaded
All checks were successful
continuous-integration/drone/push Build is passing
2023-06-05 11:45:22 -07:00
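
Per main later in this diff, the repeatable -v flag maps directly onto stdlib logging levels via logging.getLogger().setLevel(30 - 10 * args.v): no flag keeps WARNING (30), -v gives INFO (20), and -vv gives DEBUG (10).
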
face8e9af0 Bump version to 2.0.0
All checks were successful
continuous-integration/drone/push Build is passing
continuous-integration/drone/tag Build is passing
Changed behavior around pre-release versions
2023-05-22 17:11:38 -07:00
d555284a01 Avoid installing pre-release versions unless explicitly asked
All checks were successful
continuous-integration/drone/push Build is passing
2023-05-22 17:09:55 -07:00
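
Concretely, fetch_release now skips any release marked prerelease when resolving the latest release, unless pre_release=True is passed (exposed on the CLI as --prerelease; see the _parse_args diff below).
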
869b0b25b4 Fix passing of version when using download_release 2023-05-22 17:05:28 -07:00
d6c0673a1d Use new type annotations introduced in Python 3.10
All checks were successful
continuous-integration/drone/push Build is passing
2022-10-11 12:42:07 -07:00
d48daaab10 Update dev requirements
Make sure mypy and type stubs are installed in dev environment. They
are already used for linting.
2022-10-11 12:42:07 -07:00
e147fad63c Bump version to 1.2.0
All checks were successful
continuous-integration/drone/tag Build is passing
2022-10-11 12:41:30 -07:00
ab0603d1b9 Improve content type detection
All checks were successful
continuous-integration/drone/push Build is passing
Cycle through detected content types and use the first supported one.

Adds tests to cover cases of priority and exceptions.
2022-10-11 12:20:57 -07:00
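
A simplified sketch of that cycle (the real get_asset_package added below tries to construct a PackageAdapter and catches UnsupportedContentTypeError; this version just checks membership in a set of supported types):

from mimetypes import guess_type

def pick_content_type(asset: dict, supported: set) -> str:
    # Prefer the content type reported by the release API, then fall
    # back to a guess derived from the asset's file name
    candidates = (
        asset.get("content_type"),
        "+".join(t for t in guess_type(asset["name"]) if t is not None),
    )
    for content_type in candidates:
        if content_type and content_type in supported:
            return content_type
    raise ValueError(f"No supported content type among {candidates}")

print(pick_content_type({"name": "tool.tar.gz"}, {"application/x-tar+gzip"}))
# application/x-tar+gzip
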
e6a269af3d Add application/x-tar+xz as a known content type 2022-10-11 12:20:08 -07:00
13 changed files with 858 additions and 246 deletions

View File

@@ -5,6 +5,8 @@ PYTHON_VERSIONS = [
"3.8",
"3.9",
"3.10",
"3.11",
"3.12",
"latest",
]
@@ -14,6 +16,19 @@ def main(ctx):
# Run tests
pipelines += tests()
pipelines += [{
"kind": "pipeline",
"name": "lint",
"workspace": get_workspace(),
"steps": [{
"name": "lint",
"image": "python:3",
"commands": [
"python -V",
"make lint",
]
}]
}]
# Add pypi push pipeline
pipelines += push_to_pypi()
@@ -42,25 +57,24 @@ def tests():
"name": "tests",
"workspace": get_workspace(),
"steps": [
tox_step("python:"+version)
test_step("python:"+version)
for version in PYTHON_VERSIONS
],
}]
# Builds a single python test step
def tox_step(docker_tag, python_cmd="python", tox_env="py3"):
def test_step(docker_tag, python_cmd="python"):
return {
"name": "test {}".format(docker_tag.replace(":", "")),
"image": docker_tag,
"environment": {
"TOXENV": tox_env,
},
"commands": [
"{} -V".format(python_cmd),
"pip install tox",
"tox",
"make clean-all test"
],
"environment": {
"PIP_CACHE_DIR": ".pip-cache",
},
}
@@ -109,36 +123,36 @@ def push_to_pypi():
return [{
"kind": "pipeline",
"name": "deploy to pypi",
"depends_on": ["tests"],
"depends_on": ["tests", "lint"],
"workspace": get_workspace(),
"trigger": {
"ref": [
"refs/heads/main",
# "refs/heads/main",
"refs/tags/v*",
],
},
"steps": [
{
"name": "push to test pypi",
"image": "python:3",
"environment": {
"TWINE_USERNAME": {
"from_secret": "PYPI_USERNAME",
},
"TWINE_PASSWORD": {
"from_secret": "TEST_PYPI_PASSWORD",
},
},
"commands": ["make upload-test"],
},
# {
# "name": "push to test pypi",
# "image": "python:3",
# "environment": {
# "HATCH_INDEX_USER": {
# "from_secret": "PYPI_USERNAME",
# },
# "HATCH_INDEX_AUTH": {
# "from_secret": "TEST_PYPI_PASSWORD",
# },
# },
# "commands": ["make upload-test"],
# },
{
"name": "push to pypi",
"image": "python:3",
"environment": {
"TWINE_USERNAME": {
"HATCH_INDEX_USER": {
"from_secret": "PYPI_USERNAME",
},
"TWINE_PASSWORD": {
"HATCH_INDEX_AUTH": {
"from_secret": "PYPI_PASSWORD",
},
},

View File

@@ -1,11 +1,11 @@
---
repos:
- repo: https://github.com/psf/black
rev: 22.3.0
rev: 24.4.2
hooks:
- id: black
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.1.0
rev: v4.6.0
hooks:
- id: check-added-large-files
- id: check-merge-conflict
@@ -14,12 +14,12 @@ repos:
- id: trailing-whitespace
- id: name-tests-test
exclude: tests/(common.py|util.py|(helpers|integration/factories)/(.+).py)
- repo: https://github.com/asottile/reorder_python_imports
rev: v3.0.1
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: reorder-python-imports
- id: isort
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.942
rev: v1.10.0
hooks:
- id: mypy
exclude: docs/

View File

@@ -5,7 +5,7 @@ ENV := venv
.PHONY: default
default: test
# Creates virtualenv
# Creates de virtualenv
$(ENV):
python3 -m venv $(ENV)
@@ -13,87 +13,76 @@ $(ENV):
$(ENV)/bin/$(NAME): $(ENV)
$(ENV)/bin/pip install -r requirements-dev.txt
# Install tox into virtualenv for running tests
$(ENV)/bin/tox: $(ENV)
$(ENV)/bin/pip install tox
# Install wheel for building packages
$(ENV)/bin/wheel: $(ENV)
$(ENV)/bin/pip install wheel
# Install twine for uploading packages
$(ENV)/bin/twine: $(ENV)
$(ENV)/bin/pip install twine
# Install hatch into virtualenv for running tests
$(ENV)/bin/hatch: $(ENV)
$(ENV)/bin/pip install hatch
# Installs dev requirements to virtualenv
.PHONY: devenv
devenv: $(ENV)/bin/$(NAME)
# Generates a smaller env for running tox, which builds it's own env
.PHONY: test-env
test-env: $(ENV)/bin/tox
# Generates a small build env for building and uploading dists
.PHONY: build-env
build-env: $(ENV)/bin/twine $(ENV)/bin/wheel
# Runs package
.PHONY: run
run: $(ENV)/bin/$(NAME)
$(ENV)/bin/$(NAME)
# Runs tests with tox
# Runs tests for current python
.PHONY: test
test: $(ENV)/bin/tox
$(ENV)/bin/tox
test: $(ENV)/bin/hatch
$(ENV)/bin/hatch run +py=3 test:run
# Runs test matrix
.PHONY: test-matrix
test-matrix: $(ENV)/bin/hatch
$(ENV)/bin/hatch run test:run
# Builds wheel for package to upload
.PHONY: build
build: $(ENV)/bin/wheel
$(ENV)/bin/python setup.py sdist
$(ENV)/bin/python setup.py bdist_wheel
build: $(ENV)/bin/hatch
$(ENV)/bin/hatch build
# Verify that the python version matches the git tag so we don't push bad shas
.PHONY: verify-tag-version
verify-tag-version:
verify-tag-version: $(ENV)/bin/hatch
$(eval TAG_NAME = $(shell [ -n "$(DRONE_TAG)" ] && echo $(DRONE_TAG) || git describe --tags --exact-match))
test "v$(shell python setup.py -V)" = "$(TAG_NAME)"
test "v$(shell $(ENV)/bin/hatch version)" = "$(TAG_NAME)"
# Uses twine to upload to pypi
# Upload to pypi
.PHONY: upload
upload: verify-tag-version build $(ENV)/bin/twine
$(ENV)/bin/twine upload dist/*
upload: verify-tag-version build
$(ENV)/bin/hatch publish
# Uses twine to upload to test pypi
.PHONY: upload-test
upload-test: build $(ENV)/bin/twine
$(ENV)/bin/twine check dist/*
$(ENV)/bin/twine upload --skip-existing --repository-url https://test.pypi.org/legacy/ dist/*
upload-test: build
# Bump version to a post version based on num of commits since last tag to prevent overwriting
$(ENV)/bin/hatch version $(shell git describe --tags | sed 's/-[0-9a-z]*$$//')
$(ENV)/bin/hatch publish --repo test
# Cleans all build, runtime, and test artifacts
.PHONY: clean
clean:
rm -fr ./build *.egg-info ./htmlcov ./.coverage ./.pytest_cache ./.tox
rm -fr ./build *.egg-info ./htmlcov ./.coverage ./.pytest_cache
find . -name '*.pyc' -delete
find . -name '__pycache__' -delete
# Cleans dist and env
.PHONY: dist-clean
dist-clean: clean
-$(ENV)/bin/hatch env prune
rm -fr ./dist $(ENV)
# Run linters
.PHONY: lint
lint: $(ENV)/bin/hatch
$(ENV)/bin/hatch run lint:all
# Install pre-commit hooks
.PHONY: install-hooks
install-hooks: devenv
$(ENV)/bin/pre-commit install -f --install-hooks
$(ENV)/bin/hatch run lint:install-hooks
# Generates test coverage
.coverage:
$(ENV)/bin/tox
.coverage: test
# Builds coverage html
htmlcov/index.html: .coverage
$(ENV)/bin/coverage html
$(ENV)/bin/hatch run coverage html
# Opens coverage html in browser (on macOS and some Linux systems)
.PHONY: open-coverage
@@ -107,7 +96,7 @@ docs-clean:
# Builds docs
docs/build/html/index.html:
$(ENV)/bin/tox -e docs
$(ENV)/bin/hatch run docs:build
# Shorthand for building docs
.PHONY: docs
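
One note on the upload-test target above: the post-version bump leans on git describe. As a hypothetical example, if git describe --tags prints v2.2.0-5-g1a2b3c4, the sed 's/-[0-9a-z]*$//' strips the trailing -g1a2b3c4 commit id, and hatch version v2.2.0-5 records a version that will not collide with the v2.2.0 already uploaded to test PyPI.
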

View File

@@ -13,6 +13,8 @@
# sys.path.insert(0, os.path.abspath('.'))
# -- Project information -----------------------------------------------------
from __future__ import annotations
project = "release-gitter"
copyright = "2021, iamthefij"
author = "iamthefij"

View File

@@ -2,11 +2,13 @@
This builder functions as a pseudo builder that instead downloads and installs a binary file using
release-gitter based on a pyproject.toml file. It's a total hack...
"""
from __future__ import annotations
from dataclasses import dataclass
from pathlib import Path
from shutil import copy
from shutil import copytree
from shutil import move
import toml
from wheel.wheelfile import WheelFile
@@ -15,50 +17,72 @@ import release_gitter as rg
from release_gitter import removeprefix
PACKAGE_NAME = "pseudo"
@dataclass
class Config:
name: str
format: str
git_url: str
hostname: str
owner: str
repo: str
version: str | None = None
pre_release: bool = False
version_git_tag: bool = False
version_git_no_fetch: bool = False
map_system: dict[str, str] | None = None
map_arch: dict[str, str] | None = None
exec: str | None = None
extract_all: bool = False
extract_files: list[str] | None = None
include_extra_files: list[str] | None = None
def download(config) -> list[Path]:
release = rg.fetch_release(
rg.GitRemoteInfo(config.hostname, config.owner, config.repo), config.version
)
asset = rg.match_asset(
release,
def download(config: Config, wheel_scripts: Path) -> list[Path]:
"""Download and extract files to the wheel_scripts directory"""
return rg.download_release(
rg.GitRemoteInfo(config.hostname, config.owner, config.repo),
wheel_scripts,
config.format,
version=config.version,
system_mapping=config.map_system,
arch_mapping=config.map_arch,
extract_files=config.extract_files,
pre_release=config.pre_release,
exec=config.exec,
)
files = rg.download_asset(asset, extract_files=config.extract_files)
# Optionally execute post command
if config.exec:
rg.check_call(config.exec, shell=True)
return files
def read_metadata():
config = toml.load("pyproject.toml").get("tool", {}).get("release-gitter")
if not config:
def read_metadata() -> Config:
"""Read configuration from pyproject.toml"""
pyproject = toml.load("pyproject.toml").get("tool", {}).get("release-gitter")
if not pyproject:
raise ValueError("Must have configuration in [tool.release-gitter]")
args = []
for key, value in config.items():
key = "--" + key
if key == "--format":
args += [value]
elif isinstance(value, dict):
for sub_key, sub_value in value.items():
args = [key, f"{sub_key}={sub_value}"] + args
elif isinstance(value, list):
for sub_value in value:
args = [key, sub_value] + args
else:
args = [key, value] + args
git_url = pyproject.pop("git-url", None)
remote_info = rg.parse_git_url(git_url)
return rg._parse_args(args)
config = Config(
name=pyproject.pop("name", remote_info.repo),
format=pyproject.pop("format"),
git_url=git_url,
hostname=pyproject.pop("hostname", remote_info.hostname),
owner=pyproject.pop("owner", remote_info.owner),
repo=pyproject.pop("repo", remote_info.repo),
)
for key, value in pyproject.items():
setattr(config, str(key).replace("-", "_"), value)
if config.version is None:
config.version = rg.read_version(
config.version_git_tag,
not config.version_git_no_fetch,
)
if config.extract_all:
config.extract_files = []
return config
class _PseudoBuildBackend:
@@ -68,14 +92,14 @@ class _PseudoBuildBackend:
def prepare_metadata_for_build_wheel(
self, metadata_directory, config_settings=None
):
# Createa .dist-info directory containing wheel metadata inside metadata_directory. Eg {metadata_directory}/{package}-{version}.dist-info/
# Create a .dist-info directory containing wheel metadata inside metadata_directory. Eg {metadata_directory}/{package}-{version}.dist-info/
print("Prepare meta", metadata_directory, config_settings)
metadata = read_metadata()
version = removeprefix(metadata.version, "v")
version = removeprefix(metadata.version, "v") if metadata.version else "0.0.0"
# Returns distinfo dir?
dist_info = Path(metadata_directory) / f"{PACKAGE_NAME}-{version}.dist-info"
dist_info = Path(metadata_directory) / f"{metadata.name}-{version}.dist-info"
dist_info.mkdir()
# Write metadata
@@ -84,7 +108,7 @@ class _PseudoBuildBackend:
"\n".join(
[
"Metadata-Version: 2.1",
f"Name: {PACKAGE_NAME}",
f"Name: {metadata.name}",
f"Version: {version}",
]
)
@@ -116,31 +140,40 @@ class _PseudoBuildBackend:
def build_wheel(
self, wheel_directory, config_settings=None, metadata_directory=None
):
if metadata_directory is None:
raise ValueError("Cannot build wheel without metadata_directory")
metadata_directory = Path(metadata_directory)
metadata = read_metadata()
version = removeprefix(metadata.version, "v")
version = removeprefix(metadata.version, "v") if metadata.version else "0.0.0"
wheel_directory = Path(wheel_directory)
wheel_directory.mkdir(exist_ok=True)
wheel_scripts = wheel_directory / f"{PACKAGE_NAME}-{version}.data/scripts"
wheel_scripts = wheel_directory / f"{metadata.name}-{version}.data/scripts"
wheel_scripts.mkdir(parents=True, exist_ok=True)
copytree(metadata_directory, wheel_directory / metadata_directory.name)
metadata = read_metadata()
files = download(metadata)
for file in files:
move(file, wheel_scripts / file.name)
download(metadata, wheel_scripts)
print(f"ls {wheel_directory}: {list(wheel_directory.glob('*'))}")
for file_name in metadata.include_extra_files or []:
file = Path(file_name)
if Path.cwd() in file.absolute().parents:
copy(file_name, wheel_scripts / file)
else:
raise ValueError(
f"Cannot include any path that is not within the current directory: {file_name}"
)
wheel_filename = f"{PACKAGE_NAME}-{version}-py2.py3-none-any.whl"
print(f"ls {wheel_directory}: {list(wheel_directory.rglob('*'))}")
wheel_filename = f"{metadata.name}-{version}-py2.py3-none-any.whl"
with WheelFile(wheel_directory / wheel_filename, "w") as wf:
print("Repacking wheel as {}...".format(wheel_filename), end="")
# sys.stdout.flush()
wf.write_files(wheel_directory)
wf.write_files(str(wheel_directory))
return wheel_filename

pseudo_builder_test.py Normal file
View File

@@ -0,0 +1,46 @@
from __future__ import annotations
import shutil
import subprocess
import venv
from pathlib import Path
from unittest import TestCase
ITEST_VENV_PATH = Path("venv-itest")
class TestPseudoBuilder(TestCase):
def setUp(self):
venv.create(
ITEST_VENV_PATH,
system_site_packages=False,
clear=True,
with_pip=True,
)
self.pip_install("-e", ".[builder]")
def tearDown(self):
shutil.rmtree(ITEST_VENV_PATH)
def pip_install(self, *args: str):
subprocess.run(
[str(ITEST_VENV_PATH.joinpath("bin", "pip")), "install", *args],
check=True,
)
def test_install_remote_package(self):
self.assertTrue(ITEST_VENV_PATH.exists())
self.assertTrue(ITEST_VENV_PATH.joinpath("bin", "python").exists())
self.assertTrue(ITEST_VENV_PATH.joinpath("bin", "pip").exists())
itest_packages = {
"stylua": "git+https://github.com/JohnnyMorganz/StyLua",
"selene": "git+https://github.com/amitds1997/selene",
}
for package, source in itest_packages.items():
self.pip_install("--no-index", "--no-build-isolation", source)
# Check if the package is installed
assert ITEST_VENV_PATH.joinpath("bin", package).exists()
# Check if the package has executable permissions
assert ITEST_VENV_PATH.joinpath("bin", package).stat().st_mode & 0o111

pyproject.toml Normal file
View File

@@ -0,0 +1,69 @@
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "release-gitter"
dynamic = ["version"]
description = "Easily download releases from sites like Github and Gitea"
readme = "README.md"
license = "MIT"
classifiers = [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License",
]
authors = [
{ name = "Ian Fijolek", email = "iamthefij@gmail.com" }
]
maintainers = [
{ name = "Ian Fijolek", email = "iamthefij@gmail.com" }
]
requires-python = ">=3.7"
dependencies = ["requests"]
[project.optional-dependencies]
builder = [
"toml",
"wheel",
]
[project.scripts]
release-gitter = "release_gitter:main"
[project.urls]
Homepage = "https://git.iamthefij.com/iamthefij/release-gitter"
[tool.hatch.version]
path = "release_gitter.py"
[tool.hatch.build]
include = ["release_gitter.py", "pseudo_builder.py"]
[tool.hatch.envs.test]
dependencies = [
"coverage",
]
[tool.hatch.envs.test.scripts]
run = [
"coverage erase",
"coverage run --source=release_gitter,pseudo_builder -m unittest discover -p '*_test.py'",
"coverage report -m # --fail-under 70",
]
[[tool.hatch.envs.test.matrix]]
python = ["3", "3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
[tool.hatch.envs.lint]
detached = true
dependencies = ["pre-commit"]
[tool.hatch.envs.lint.scripts]
all = "pre-commit run --all-files"
install-hooks = "pre-commit install --install-hooks"
[tool.isort]
add_imports = ["from __future__ import annotations"]
force_single_line = true
profile = "black"

View File

@@ -2,10 +2,13 @@
from __future__ import annotations
import argparse
import logging
import platform
import tempfile
from collections.abc import Sequence
from dataclasses import dataclass
from io import BytesIO
from itertools import product
from mimetypes import guess_type
from pathlib import Path
from subprocess import check_call
@@ -13,11 +16,21 @@ from subprocess import check_output
from tarfile import TarFile
from tarfile import TarInfo
from typing import Any
from typing import NamedTuple
from urllib.parse import urlparse
from zipfile import ZipFile
import requests
__version__ = "3.0.3"
logging.basicConfig(level=logging.WARNING)
class UnsupportedContentTypeError(ValueError):
pass
class InvalidRemoteError(ValueError):
pass
@@ -41,6 +54,37 @@ def removesuffix(s: str, suf: str) -> str:
return s[: -len(suf)] if s and s.endswith(suf) else s
SYSTEM_SYNONYMS: list[list[str]] = [
["Darwin", "darwin", "MacOS", "macos", "macOS"],
["Windows", "windows", "win", "win32", "win64"],
["Linux", "linux"],
]
ARCH_SYNONYMS: list[list[str]] = [
["arm"],
["x86_64", "amd64", "AMD64"],
["arm64", "aarch64", "armv8b", "armv8l"],
["x86", "i386", "i686"],
]
def get_synonyms(value: str, thesaurus: list[list[str]]) -> list[str]:
"""Gets synonym list for a given value."""
results = [value]
for l in thesaurus:
if value in l:
results += l
return results
class MatchedValues(NamedTuple):
version: str
system: str
arch: str
@dataclass
class GitRemoteInfo:
"""Extracts information about a repository"""
@@ -82,13 +126,13 @@ class GitRemoteInfo:
)
def parse_git_remote(git_url: str | None = None) -> GitRemoteInfo:
"""Extract Github repo info from a git remote url"""
if not git_url:
git_url = (
check_output(["git", "remote", "get-url", "origin"]).decode("UTF-8").strip()
)
def read_git_remote() -> str:
"""Reads the git remote url from the origin"""
return check_output(["git", "remote", "get-url", "origin"]).decode("UTF-8").strip()
def parse_git_url(git_url: str) -> GitRemoteInfo:
"""Extract Github repo info from a git remote url"""
# Normalize Github ssh url as a proper URL
if git_url.startswith("git@github.com:"):
git_ssh_parts = git_url.partition(":")
@@ -135,6 +179,7 @@ def read_git_tag(fetch: bool = True) -> str | None:
def read_version(from_tags: bool = False, fetch: bool = False) -> str | None:
"""Read version information from file or from git"""
if from_tags:
logging.debug("Reading version from git tag")
return read_git_tag(fetch)
matchers = {
@@ -144,17 +189,20 @@ def read_version(from_tags: bool = False, fetch: bool = False) -> str | None:
for name, extractor in matchers.items():
p = Path(name)
if p.exists():
logging.debug(f"Reading version from {p}")
return extractor(p)
# TODO: Log this out to stderr
# raise ValueError(f"Unknown project type. Didn't find any of {matchers.keys()}")
logging.warning(
"Unknown local project version. Didn't find any of %s", set(matchers.keys())
)
return None
def fetch_release(
remote: GitRemoteInfo,
version: str | None = None
# TODO: Accept an argument for pre-release
version: str | None = None,
pre_release=False,
) -> dict[Any, Any]:
"""Fetches a release object from a Github repo
@@ -170,14 +218,25 @@ def fetch_release(
# Return the latest if requested
if version is None or version == "latest":
return result.json()[0]
logging.debug("Looking for latest release")
for release in result.json():
if release["prerelease"] and not pre_release:
continue
return release
# Return matching version
for release in result.json():
if release["tag_name"].endswith(version):
logging.debug(f"Found release {release['name']} matching version {version}")
return release
raise ValueError(f"Could not find release version ending in {version}")
raise ValueError(
f"Could not find release version ending in {version}."
f"{ ' Is it a pre-release?' if not pre_release else ''}"
)
def match_asset(
@@ -186,7 +245,7 @@ def match_asset(
version: str | None = None,
system_mapping: dict[str, str] | None = None,
arch_mapping: dict[str, str] | None = None,
) -> dict[Any, Any]:
) -> tuple[dict[Any, Any], MatchedValues]:
"""Accepts a release and searches for an appropriate asset attached using
a provided template and some alternative mappings for version, system, and machine info
@@ -222,37 +281,39 @@ def match_asset(
# This should never really happen
if version is None:
if "{version}" in format:
raise ValueError(
"No version provided or found in release name but is in format"
)
else:
# This should never happen, but since version isn't used anywhere, we can make it an empty string
version = ""
raise ValueError("No version provided or found in release name.")
system = platform.system()
if system_mapping:
system = system_mapping.get(system, system)
systems = [system_mapping.get(system, system)]
else:
systems = get_synonyms(system, SYSTEM_SYNONYMS)
arch = platform.machine()
if arch_mapping:
arch = arch_mapping.get(arch, arch)
archs = [arch_mapping.get(arch, arch)]
else:
archs = get_synonyms(arch, ARCH_SYNONYMS)
expected_names = {
format.format(
version=normalized_version,
system=system,
arch=arch,
)
for normalized_version in (
version.lstrip("v"),
"v" + version if not version.startswith("v") else version,
version=version_opt,
system=system_opt,
arch=arch_opt,
): MatchedValues(version=version_opt, system=system_opt, arch=arch_opt)
for version_opt, system_opt, arch_opt in product(
(
version.lstrip("v"),
"v" + version if not version.startswith("v") else version,
),
systems,
archs,
)
}
for asset in release["assets"]:
if asset["name"] in expected_names:
return asset
return (asset, expected_names[asset["name"]])
raise ValueError(
f"Could not find asset named {expected_names} on release {release['name']}"
@@ -268,22 +329,32 @@ class PackageAdapter:
"application/zip",
"application/x-zip-compressed",
):
logging.debug("Opening zip file from response content")
self._package = ZipFile(BytesIO(response.content))
elif content_type == "application/x-tar":
logging.debug("Opening tar file from response content")
self._package = TarFile(fileobj=response.raw)
elif content_type in (
"application/gzip",
"application/x-tar+gzip",
"application/x-tar+xz",
"application/x-compressed-tar",
):
logging.debug("Opening compressed tar file from response content")
self._package = TarFile.open(fileobj=BytesIO(response.content), mode="r:*")
else:
raise ValueError(f"Unknown or unsupported content type {content_type}")
raise UnsupportedContentTypeError(
f"Unknown or unsupported content type {content_type}"
)
def get_names(self) -> list[str]:
"""Get list of all file names in package"""
if isinstance(self._package, ZipFile):
return self._package.namelist()
if isinstance(self._package, TarFile):
return self._package.getnames()
@@ -301,20 +372,52 @@ class PackageAdapter:
If the `file_names` list is empty, all files will be extracted"""
if path is None:
path = Path.cwd()
if not members:
logging.debug("Extracting all members to %s", path)
self._package.extractall(path=path)
return self.get_names()
# TODO: Use walrus operator when dropping 3.7 support
missing_members = set(members) - set(self.get_names())
if missing_members:
raise ValueError(f"Missing members: {missing_members}")
logging.debug("Extracting members %s to %s", members, path)
if isinstance(self._package, ZipFile):
self._package.extractall(path=path, members=members)
if isinstance(self._package, TarFile):
self._package.extractall(
path=path, members=(TarInfo(name) for name in members)
path=path, members=(self._package.getmember(name) for name in members)
)
return members
def get_asset_package(
asset: dict[str, Any], result: requests.Response
) -> PackageAdapter:
possible_content_types = (
asset.get("content_type"),
"+".join(t for t in guess_type(asset["name"]) if t is not None),
)
for content_type in possible_content_types:
if not content_type:
continue
try:
return PackageAdapter(content_type, result)
except UnsupportedContentTypeError:
continue
else:
raise UnsupportedContentTypeError(
f"Cannot extract files from archive because we don't recognize the content types {possible_content_types}"
)
def download_asset(
asset: dict[Any, Any],
extract_files: list[str] | None = None,
@@ -337,19 +440,11 @@ def download_asset(
result = requests.get(asset["browser_download_url"])
content_type = asset.get(
"content_type",
guess_type(asset["name"]),
)
if extract_files is not None:
if isinstance(content_type, tuple):
content_type = "+".join(t for t in content_type if t is not None)
if not content_type:
raise TypeError(
"Cannot extract files from archive because we don't recognize the content type"
)
package = PackageAdapter(content_type, result)
logging.info("Extracting package %s", asset["name"])
package = get_asset_package(asset, result)
extract_files = package.extractall(path=destination, members=extract_files)
return [destination / name for name in extract_files]
file_name = destination / asset["name"]
@@ -396,6 +491,7 @@ class MapAddAction(argparse.Action):
def _parse_args(args: list[str] | None = None) -> argparse.Namespace:
logging.debug("Parsing arguments: %s", args)
parser = argparse.ArgumentParser()
parser.add_argument(
"format",
@@ -409,6 +505,9 @@ def _parse_args(args: list[str] | None = None) -> argparse.Namespace:
default=Path.cwd(),
help="Destination directory. Defaults to current directory",
)
parser.add_argument(
"-v", action="count", help="verbose or debug logging", default=0
)
parser.add_argument(
"--hostname",
help="Git repository hostname",
@@ -427,7 +526,12 @@ def _parse_args(args: list[str] | None = None) -> argparse.Namespace:
)
parser.add_argument(
"--version",
help="Release version to download. If not provied, it will look for project metadata",
help="Release version to download. If not provided, it will look for project metadata",
)
parser.add_argument(
"--prerelease",
action="store_true",
help="Include pre-release versions in search",
)
parser.add_argument(
"--version-git-tag",
@@ -461,25 +565,42 @@ def _parse_args(args: list[str] | None = None) -> argparse.Namespace:
"--extract-files",
"-e",
action="append",
help="A list of file names to extract from downloaded archive",
help="A list of file names to extract from the downloaded archive",
)
parser.add_argument(
"--extract-all",
"-x",
action="store_true",
help="Shell commands to execute after download or extraction",
help="Extract all files from the downloaded archive",
)
parser.add_argument(
"--url-only",
action="store_true",
help="Only print the URL and do not download",
)
parser.add_argument(
"--use-temp-dir",
action="store_true",
help="Use a temporary directory as the destination",
)
parsed_args = parser.parse_args(args)
# Merge in fields from args and git remote
if not all((parsed_args.owner, parsed_args.repo, parsed_args.hostname)):
remote_info = parse_git_remote(parsed_args.git_url)
# Check to see if a git url was provided. If not, we use local directory git remote
if parsed_args.git_url is None:
parsed_args.git_url = read_git_remote()
# If using a local repo, try to determine version from project files
if parsed_args.version is None:
parsed_args.version = read_version(
parsed_args.version_git_tag,
not parsed_args.version_git_no_fetch,
)
# Get parts from git url
remote_info = parse_git_url(parsed_args.git_url)
def merge_field(a, b, field):
value = getattr(a, field)
@@ -489,15 +610,12 @@ def _parse_args(args: list[str] | None = None) -> argparse.Namespace:
for field in ("owner", "repo", "hostname"):
merge_field(parsed_args, remote_info, field)
if parsed_args.version is None:
parsed_args.version = read_version(
parsed_args.version_git_tag,
not parsed_args.version_git_no_fetch,
)
if parsed_args.extract_all:
parsed_args.extract_files = []
if parsed_args.use_temp_dir:
parsed_args.destination = Path(tempfile.mkdtemp())
return parsed_args
@@ -509,46 +627,100 @@ def download_release(
system_mapping: dict[str, str] | None = None,
arch_mapping: dict[str, str] | None = None,
extract_files: list[str] | None = None,
pre_release=False,
exec: str | None = None,
) -> list[Path]:
"""Convenience method for fetching, downloading and extracting a release"""
release = fetch_release(remote_info)
asset = match_asset(
"""Convenience method for fetching, downloading, and extracting a release
This is slightly different than running off the commandline, it will execute the shell script
from the destination directory, not the current working directory.
"""
release = fetch_release(
remote_info,
version=version,
pre_release=pre_release,
)
asset, matched_values = match_asset(
release,
format,
version=version,
system_mapping=system_mapping,
arch_mapping=arch_mapping,
)
format_fields = dict(
asset_name=asset["name"],
**matched_values._asdict(),
)
formatted_files = (
[file.format(**format_fields) for file in extract_files]
if extract_files is not None
else None
)
files = download_asset(
asset,
extract_files=extract_files,
extract_files=formatted_files,
destination=destination,
)
if exec:
check_call(
exec.format(asset["name"], **format_fields), shell=True, cwd=destination
)
return files
def main():
args = _parse_args()
logging.getLogger().setLevel(30 - 10 * args.v)
# Fetch the release
release = fetch_release(
GitRemoteInfo(args.hostname, args.owner, args.repo), args.version
GitRemoteInfo(args.hostname, args.owner, args.repo),
version=args.version,
pre_release=args.prerelease,
)
asset = match_asset(
logging.debug("Found release: %s", release["name"])
version = args.version or release["tag_name"]
logging.debug("Release version: %s", version)
# Find the asset to download using mapping rules
asset, matched_values = match_asset(
release,
args.format,
version=args.version,
version=version,
system_mapping=args.map_system,
arch_mapping=args.map_arch,
)
logging.info(f"Downloading {asset['name']} from release {release['name']}")
if args.url_only:
print(asset["browser_download_url"])
return
format_fields = dict(
asset_name=asset["name"],
**matched_values._asdict(),
)
# Format files to extract with version info, as this is sometimes included
formatted_files = (
[file.format(**format_fields) for file in args.extract_files]
if args.extract_files is not None
else None
)
files = download_asset(
asset,
extract_files=args.extract_files,
extract_files=formatted_files,
destination=args.destination,
)
@@ -556,7 +728,11 @@ def main():
# Optionally execute post command
if args.exec:
check_call(args.exec.format(asset["name"]), shell=True)
check_call(
args.exec.format(asset["name"], **format_fields),
shell=True,
cwd=args.destination,
)
if __name__ == "__main__":

View File

@@ -1,14 +1,16 @@
from __future__ import annotations
import unittest
from pathlib import Path
from itertools import chain
from itertools import product
from tarfile import TarFile
from typing import Any
from typing import Callable
from typing import NamedTuple
from typing import Optional
from unittest.mock import MagicMock
from unittest.mock import mock_open
from unittest.mock import patch
from zipfile import ZipFile
import requests
@@ -20,10 +22,11 @@ class TestExpression(NamedTuple):
args: list[Any]
kwargs: dict[str, Any]
expected: Any
exception: Optional[type[Exception]] = None
exception: type[Exception] | None = None
msg: str | None = None
def run(self, f: Callable):
with self.t.subTest(f=f, args=self.args, kwargs=self.kwargs):
with self.t.subTest(msg=self.msg, f=f, args=self.args, kwargs=self.kwargs):
try:
result = f(*self.args, **self.kwargs)
self.t.assertIsNone(
@@ -79,7 +82,7 @@ class TestRemoteInfo(unittest.TestCase):
release_gitter.InvalidRemoteError,
),
):
test_case.run(release_gitter.parse_git_remote)
test_case.run(release_gitter.parse_git_url)
def test_generate_release_url(self):
for subtest in (
@@ -141,5 +144,339 @@ class TestVersionInfo(unittest.TestCase):
release_gitter.read_version()
@patch("release_gitter.ZipFile", autospec=True)
@patch("release_gitter.BytesIO", autospec=True)
class TestContentTypeDetection(unittest.TestCase):
def test_asset_encoding_priority(self, *_):
package = release_gitter.get_asset_package(
{
"content_type": "application/x-tar",
"name": "test.zip",
},
MagicMock(spec=["raw", "content"]),
)
# Tar should take priority over the file name zip extension
self.assertIsInstance(package._package, TarFile)
def test_fallback_to_supported_encoding(self, *_):
package = release_gitter.get_asset_package(
{
"content_type": "application/octetstream",
"name": "test.zip",
},
MagicMock(spec=["raw", "content"]),
)
# Should fall back to zip extension
self.assertIsInstance(package._package, ZipFile)
def test_missing_only_name_content_type(self, *_):
package = release_gitter.get_asset_package(
{
"name": "test.zip",
},
MagicMock(spec=["raw", "content"]),
)
# Should fall back to zip extension
self.assertIsInstance(package._package, ZipFile)
def test_no_content_types(self, *_):
with self.assertRaises(release_gitter.UnsupportedContentTypeError):
release_gitter.get_asset_package(
{
"name": "test",
},
MagicMock(spec=["raw", "content"]),
)
def test_no_supported_content_types(self, *_):
with self.assertRaises(release_gitter.UnsupportedContentTypeError):
release_gitter.get_asset_package(
{
"content_type": "application/octetstream",
"name": "test",
},
MagicMock(spec=["raw", "content"]),
)
def first_result(f):
def wrapper(*args, **kwargs):
return f(*args, **kwargs)[0]
return wrapper
class TestMatchAsset(unittest.TestCase):
def test_match_asset_versions(self, *_):
# Input variations:
# Case 1: Version provided with prefix
# Case 2: Version provided without prefix
# Case 3: No version provided, tag exists in release
# These should be impossible
# Case 4: No version provided, tag doesn't exist in release but not in template
# Case 5: No version provided, tag doesn't exist in release and is in template
# Release variations:
# Case 1: tag_name with version prefix
# Case 2: tag_name without version prefix
# File variations:
# Case 1: file name with version prefix
# Case 2: file name without version prefix
def new_expression(version: str | None, tag_name: str, file_name: str):
release = {"tag_name": tag_name, "assets": [{"name": file_name}]}
expected = {"name": file_name}
return TestExpression(
self, [release, "file-{version}.zip", version], {}, expected
)
happy_cases = [
new_expression(version, tag_name, file_name)
for version, tag_name, file_name in product(
("v1.0.0", "1.0.0", None),
("v1.0.0", "1.0.0"),
("file-v1.0.0.zip", "file-1.0.0.zip"),
)
]
for test_case in happy_cases:
test_case.run(first_result(release_gitter.match_asset))
def test_match_asset_systems(self, *_):
# Input variations:
# Case 1: System mapping provided
# Case 2: No system mapping provided
# Test: We want to show that default matching will work out of the box with some values for the current machine
# Test: We want to show that non-standard mappings will always work if provided manually
def run_with_context(actual_system: str, *args, **kwargs):
with patch("platform.system", return_value=actual_system):
return release_gitter.match_asset(*args, **kwargs)
def new_expression(
actual_system: str,
system_mapping: dict[str, str] | None,
file_name: str,
expected: dict[str, str],
exception: type[Exception] | None = None,
msg: str | None = None,
):
release = {
"name": "v1.0.0",
"tag_name": "v1.0.0",
"assets": [{"name": file_name}],
}
return TestExpression(
self,
[actual_system, release, "file-{system}.zip"],
{"system_mapping": system_mapping},
expected,
exception,
msg,
)
test_cases = chain(
[
new_expression(
"Earth",
None,
"file-Earth.zip",
{"name": "file-Earth.zip"},
msg="Current system always included as an exact match synonym",
),
new_expression(
"Linux",
{"Linux": "jumanji"},
"file-jumanji.zip",
{"name": "file-jumanji.zip"},
msg="Non-standard system mapping works",
),
new_expression(
"Linux",
{},
"file-darwin.zip",
{},
ValueError,
msg="No matching system",
),
],
# Test default mappings
(
new_expression(
actual_system,
None,
file_name,
{"name": file_name},
msg="Default Linux mappings",
)
for actual_system, file_name in product(
("Linux", "linux"),
("file-Linux.zip", "file-linux.zip"),
)
),
(
new_expression(
actual_system,
None,
file_name,
{"name": file_name},
msg="Default macOS mappings",
)
for actual_system, file_name in product(
("Darwin", "darwin", "MacOS", "macos", "macOS"),
(
"file-Darwin.zip",
"file-darwin.zip",
"file-MacOS.zip",
"file-macos.zip",
),
)
),
(
new_expression(
actual_system,
None,
file_name,
{"name": file_name},
msg="Default Windows mappings",
)
for actual_system, file_name in product(
("Windows", "windows", "win", "win32", "win64"),
(
"file-Windows.zip",
"file-windows.zip",
"file-win.zip",
"file-win32.zip",
"file-win64.zip",
),
)
),
)
for test_case in test_cases:
test_case.run(first_result(run_with_context))
def test_match_asset_archs(self, *_):
# Input variations:
# Case 1: Arch mapping provided
# Case 2: No arch mapping provided
# Test: We want to show that default matching will work out of the box with some values for the current machine
# Test: We want to show that non-standard mappings will always work if provided manually
def run_with_context(actual_arch: str, *args, **kwargs):
with patch("platform.machine", return_value=actual_arch):
return release_gitter.match_asset(*args, **kwargs)
def new_expression(
actual_arch: str,
arch_mapping: dict[str, str] | None,
file_name: str,
expected: dict[str, str],
exception: type[Exception] | None = None,
msg: str | None = None,
):
release = {
"name": "v1.0.0",
"tag_name": "v1.0.0",
"assets": [{"name": file_name}],
}
return TestExpression(
self,
[actual_arch, release, "file-{arch}.zip"],
{"arch_mapping": arch_mapping},
expected,
exception,
msg,
)
test_cases = chain(
[
new_expression(
"Earth",
None,
"file-Earth.zip",
{"name": "file-Earth.zip"},
msg="Current arch always included as an exact match synonym",
),
new_expression(
"x86_64",
{"x86_64": "jumanji"},
"file-jumanji.zip",
{"name": "file-jumanji.zip"},
msg="Non-standard arch mapping works",
),
new_expression(
"x86_64",
{},
"file-arm.zip",
{},
ValueError,
msg="No matching arch",
),
],
# Test default mappings
(
new_expression(
actual_arch,
None,
file_name,
{"name": file_name},
msg="Default arm mappings",
)
for actual_arch, file_name in product(
("arm",),
("file-arm.zip",),
)
),
(
new_expression(
actual_arch,
None,
file_name,
{"name": file_name},
msg="Default amd64 mappings",
)
for actual_arch, file_name in product(
("amd64", "x86_64", "AMD64"),
("file-amd64.zip", "file-x86_64.zip"),
)
),
(
new_expression(
actual_arch,
None,
file_name,
{"name": file_name},
msg="Default arm64 mappings",
)
for actual_arch, file_name in product(
("arm64", "aarch64", "armv8b", "armv8l"),
(
"file-arm64.zip",
"file-aarch64.zip",
"file-armv8b.zip",
"file-armv8l.zip",
),
)
),
(
new_expression(
actual_arch,
None,
file_name,
{"name": file_name},
msg="Default x86 mappings",
)
for actual_arch, file_name in product(
("x86", "i386", "i686"),
("file-x86.zip", "file-i386.zip", "file-i686.zip"),
)
),
)
for test_case in test_cases:
test_case.run(first_result(run_with_context))
if __name__ == "__main__":
unittest.main()

View File

@@ -1,4 +1,6 @@
-e .
pytest
coverage
hatch
mypy
pre-commit
types-requests
types-toml

View File

@@ -11,6 +11,7 @@ version = "0.11.3"
extract-files = [ "stylua" ]
format = "stylua-{version}-{system}.zip"
exec = "chmod +x stylua"
[tool.release-gitter.map-system]
Darwin = "macos"
Windows = "win64"

View File

@@ -1,40 +0,0 @@
from codecs import open
from os import path
from setuptools import find_packages
from setuptools import setup
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, "README.md"), encoding="utf-8") as f:
long_description = f.read()
setup(
name="release-gitter",
version="1.1.3",
description="Easily download releases from sites like Github and Gitea",
long_description=long_description,
long_description_content_type="text/markdown",
url="https://git.iamthefij.com/iamthefij/release-gitter.git",
download_url=(
"https://git.iamthefij.com/iamthefij/release-gitter.git/archive/master.tar.gz"
),
author="iamthefij",
author_email="",
classifiers=[
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
keywords="",
py_modules=["release_gitter", "pseudo_builder"],
install_requires=["requests"],
extras_require={"builder": ["toml", "wheel"]},
entry_points={
"console_scripts": [
"release-gitter=release_gitter:main",
],
},
)

tox.ini
View File

@@ -1,17 +0,0 @@
[tox]
envlist = py3,py37,py38,py39,py310
[testenv]
deps =
-rrequirements-dev.txt
commands =
coverage erase
coverage run --source=release_gitter -m unittest discover . {posargs:"*_test.py"}
coverage report -m # --fail-under 70
pre-commit run --all-files
[testenv:pre-commit]
deps =
pre-commit
commands =
pre-commit {posargs}