Warning: Permanently added '54.167.64.127' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir --task-url https://copr.fedorainfracloud.org/backend/get-build-task/8715311-fedora-41-x86_64 --chroot fedora-41-x86_64

Version: 1.2
PID: 9616
Logging PID: 9617
Task:
{'allow_user_ssh': False,
 'appstream': False,
 'background': True,
 'build_id': 8715311,
 'buildroot_pkgs': [],
 'chroot': 'fedora-41-x86_64',
 'enable_net': False,
 'fedora_review': False,
 'git_hash': '088ac66daebd6b76699551b06204e829bfb78533',
 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/@copr/PyPI/python-tapyoca',
 'isolation': 'default',
 'memory_reqs': 2048,
 'package_name': 'python-tapyoca',
 'package_version': '0.0.4-1',
 'project_dirname': 'PyPI',
 'project_name': 'PyPI',
 'project_owner': '@copr',
 'repo_priority': None,
 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/@copr/PyPI/fedora-41-x86_64/',
            'id': 'copr_base',
            'name': 'Copr repository',
            'priority': None}],
 'sandbox': '@copr/PyPI--ksurma',
 'source_json': {},
 'source_type': None,
 'ssh_public_keys': None,
 'storage': None,
 'submitter': 'ksurma',
 'tags': [],
 'task_id': '8715311-fedora-41-x86_64',
 'timeout': 18000,
 'uses_devel_repo': False,
 'with_opts': [],
 'without_opts': []}
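The `Task` dict above is the build definition that copr-rpmbuild fetches from the `--task-url` endpoint. A minimal sketch of pulling it yourself for inspection (assuming the endpoint serves this JSON directly and network access is available):

```python
# Hypothetical helper: fetch and inspect the Copr build task shown above.
import json
from urllib.request import urlopen

TASK_URL = ("https://copr.fedorainfracloud.org/backend/"
            "get-build-task/8715311-fedora-41-x86_64")

with urlopen(TASK_URL) as resp:
    task = json.load(resp)

# Fields that appear in this log's Task dict.
for key in ("build_id", "chroot", "git_repo", "git_hash",
            "package_name", "package_version", "timeout"):
    print(f"{key} = {task.get(key)}")
```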
Running: git clone https://copr-dist-git.fedorainfracloud.org/git/@copr/PyPI/python-tapyoca /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/@copr/PyPI/python-tapyoca', '/var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr: Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca'...

Running: git checkout 088ac66daebd6b76699551b06204e829bfb78533 --
cmd: ['git', 'checkout', '088ac66daebd6b76699551b06204e829bfb78533', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca
rc: 0
stdout:
stderr: Note: switching to '088ac66daebd6b76699551b06204e829bfb78533'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 088ac66 automatic import of python-tapyoca

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca
rc: 0
stdout:
stderr: INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading tapyoca-0.0.4.tar.gz
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o tapyoca-0.0.4.tar.gz --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/@copr/PyPI/python-tapyoca/tapyoca-0.0.4.tar.gz/md5/cb8449a5a6c8f33605c653a47dba8f88/tapyoca-0.0.4.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 96711  100 96711    0     0  7062k      0 --:--:-- --:--:-- --:--:-- 7264k
INFO: Reading stdout from command: md5sum tapyoca-0.0.4.tar.gz
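As the log shows, dist-git-client shells out to curl and md5sum: the lookaside-cache URL embeds the expected MD5, and the downloaded tarball is checked against it. The same check in Python, as a sketch (hash and URL copied from the log):

```python
# Sketch: reproduce the source download + checksum verification above.
import hashlib
from urllib.request import urlretrieve

MD5_EXPECTED = "cb8449a5a6c8f33605c653a47dba8f88"
URL = ("https://copr-dist-git.fedorainfracloud.org/repo/pkgs/@copr/PyPI/"
       f"python-tapyoca/tapyoca-0.0.4.tar.gz/md5/{MD5_EXPECTED}/"
       "tapyoca-0.0.4.tar.gz")

path, _ = urlretrieve(URL, "tapyoca-0.0.4.tar.gz")
with open(path, "rb") as f:
    md5_actual = hashlib.md5(f.read()).hexdigest()
assert md5_actual == MD5_EXPECTED, f"checksum mismatch: {md5_actual}"
```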
/usr/bin/tail: /var/lib/copr-rpmbuild/main.log: file truncated
Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca/python-tapyoca.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1740863228.713984 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.0 starting (python version = 3.13.0, NVR = mock-6.0-1.fc41), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca/python-tapyoca.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1740863228.713984 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca/python-tapyoca.spec) Config(fedora-41-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.0
INFO: Mock Version: 6.0
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1740863228.713984/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:41
INFO: Pulling image: registry.fedoraproject.org/fedora:41
INFO: Tagging container image as mock-bootstrap-7d5dfecb-5796-4b1b-86ff-53376ef5b41e
INFO: Checking that 3c6d584f74ca54f6f15eeca09fa68980b95a6495180a3b555b07dbe703a4f7cc image matches host's architecture
INFO: Copy content of container 3c6d584f74ca54f6f15eeca09fa68980b95a6495180a3b555b07dbe703a4f7cc to /var/lib/mock/fedora-41-x86_64-bootstrap-1740863228.713984/root
INFO: mounting 3c6d584f74ca54f6f15eeca09fa68980b95a6495180a3b555b07dbe703a4f7cc with podman image mount
INFO: image 3c6d584f74ca54f6f15eeca09fa68980b95a6495180a3b555b07dbe703a4f7cc as /var/lib/containers/storage/overlay/78abfe0bb6597fe1bbc72d94ba61d9bea5d50210b158df7e85d0652c301e9bc6/merged
INFO: umounting image 3c6d584f74ca54f6f15eeca09fa68980b95a6495180a3b555b07dbe703a4f7cc (/var/lib/containers/storage/overlay/78abfe0bb6597fe1bbc72d94ba61d9bea5d50210b158df7e85d0652c301e9bc6/merged) with podman image umount
INFO: Removing image mock-bootstrap-7d5dfecb-5796-4b1b-86ff-53376ef5b41e
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-1740863228.713984/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.0-1.fc41.x86_64
  rpm-sequoia-1.7.0-5.fc41.x86_64
  dnf5-5.2.10.0-2.fc41.x86_64
  dnf5-plugins-5.2.10.0-2.fc41.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 updates          100% |  36.7 MiB/s |  10.8 MiB | 00m00s
 fedora           100% |  52.9 MiB/s |  35.3 MiB | 00m01s
 Copr repository  100% | 193.1 MiB/s |  42.1 MiB | 00m00s
Repositories loaded.
Package  Arch  Version  Repository  Size
Installing group/module packages:
 bash  x86_64  5.2.32-1.fc41  fedora  8.2 MiB
 bzip2  x86_64  1.0.8-19.fc41  fedora  95.7 KiB
 coreutils  x86_64  9.5-11.fc41  updates  5.7 MiB
 cpio  x86_64  2.15-2.fc41  fedora  1.1 MiB
 diffutils  x86_64  3.10-8.fc41  fedora  1.6 MiB
 fedora-release-common  noarch  41-29  updates  19.7 KiB
 findutils  x86_64  1:4.10.0-4.fc41  fedora  1.8 MiB
 gawk  x86_64  5.3.0-4.fc41  fedora  1.7 MiB
 glibc-minimal-langpack  x86_64  2.40-21.fc41  updates  0.0 B
 grep  x86_64  3.11-9.fc41  fedora  1.0 MiB
 gzip  x86_64  1.13-2.fc41  fedora  389.0 KiB
 info  x86_64  7.1-3.fc41  fedora  361.8 KiB
 patch  x86_64  2.7.6-25.fc41  fedora  266.7 KiB
 redhat-rpm-config  noarch  293-1.fc41  fedora  183.5 KiB
 rpm-build  x86_64  4.20.0-1.fc41  fedora  194.3 KiB
 sed  x86_64  4.9-3.fc41  fedora  861.5 KiB
 shadow-utils  x86_64  2:4.15.1-12.fc41  fedora  4.1 MiB
 tar  x86_64  2:1.35-4.fc41  fedora  2.9 MiB
 unzip  x86_64  6.0-64.fc41  fedora  386.8 KiB
 util-linux  x86_64  2.40.4-1.fc41  updates  3.6 MiB
 which  x86_64  2.21-42.fc41  fedora  80.2 KiB
 xz  x86_64  1:5.6.2-2.fc41  fedora  1.2 MiB
Installing dependencies:
 add-determinism  x86_64  0.3.6-3.fc41  updates  2.4 MiB
 alternatives  x86_64  1.31-1.fc41  updates  64.8 KiB
 ansible-srpm-macros  noarch  1-16.fc41  fedora  35.7 KiB
 audit-libs  x86_64  4.0.3-1.fc41  updates  351.3 KiB
 authselect  x86_64  1.5.0-8.fc41  fedora  157.6 KiB
 authselect-libs  x86_64  1.5.0-8.fc41  fedora  822.2 KiB
 basesystem  noarch  11-21.fc41  fedora  0.0 B
 binutils  x86_64  2.43.1-5.fc41  updates  27.4 MiB
 build-reproducibility-srpm-macros  noarch  0.3.6-3.fc41  updates  735.0 B
 bzip2-libs  x86_64  1.0.8-19.fc41  fedora  80.7 KiB
 ca-certificates  noarch  2024.2.69_v8.0.401-1.0.fc41  fedora  2.4 MiB
 coreutils-common  x86_64  9.5-11.fc41  updates  11.2 MiB
 cracklib  x86_64  2.9.11-6.fc41  fedora  238.9 KiB
 crypto-policies  noarch  20250124-1.git4d262e7.fc41  updates  137.4 KiB
 curl  x86_64  8.9.1-3.fc41  updates  793.5 KiB
 cyrus-sasl-lib  x86_64  2.1.28-27.fc41  fedora  2.3 MiB
 debugedit  x86_64  5.1-4.fc41  updates  197.7 KiB
 dwz  x86_64  0.15-8.fc41  fedora  298.9 KiB
 ed  x86_64  1.20.2-2.fc41  fedora  146.9 KiB
 efi-srpm-macros  noarch  5-13.fc41  updates  40.2 KiB
 elfutils  x86_64  0.192-9.fc41  updates  2.7 MiB
 elfutils-debuginfod-client  x86_64  0.192-9.fc41  updates  84.2 KiB
 elfutils-default-yama-scope  noarch  0.192-9.fc41  updates  1.8 KiB
 elfutils-libelf  x86_64  0.192-9.fc41  updates  1.2 MiB
 elfutils-libs  x86_64  0.192-9.fc41  updates  670.2 KiB
 fedora-gpg-keys  noarch  41-1  fedora  126.4 KiB
 fedora-release  noarch  41-29  updates  0.0 B
 fedora-release-identity-basic  noarch  41-29  updates  682.0 B
 fedora-repos  noarch  41-1  fedora  4.9 KiB
 file  x86_64  5.45-7.fc41  fedora  103.5 KiB
 file-libs  x86_64  5.45-7.fc41  fedora  9.9 MiB
 filesystem  x86_64  3.18-23.fc41  fedora  106.0 B
 fonts-srpm-macros  noarch  1:2.0.5-17.fc41  fedora  55.8 KiB
 forge-srpm-macros  noarch  0.4.0-1.fc41  updates  38.9 KiB
 fpc-srpm-macros  noarch  1.3-13.fc41  fedora  144.0 B
 gdb-minimal  x86_64  16.2-1.fc41  updates  13.3 MiB
 gdbm  x86_64  1:1.23-7.fc41  fedora  460.9 KiB
 gdbm-libs  x86_64  1:1.23-7.fc41  fedora  121.9 KiB
 ghc-srpm-macros  noarch  1.9.1-2.fc41  fedora  747.0 B
 glibc  x86_64  2.40-21.fc41  updates  6.7 MiB
 glibc-common  x86_64  2.40-21.fc41  updates  1.0 MiB
 glibc-gconv-extra  x86_64  2.40-21.fc41  updates  7.9 MiB
 gmp  x86_64  1:6.3.0-2.fc41  fedora  811.4 KiB
 gnat-srpm-macros  noarch  6-6.fc41  fedora  1.0 KiB
 go-srpm-macros  noarch  3.6.0-5.fc41  updates  60.8 KiB
 jansson  x86_64  2.13.1-10.fc41  fedora  88.3 KiB
 json-c  x86_64  0.17-4.fc41  fedora  82.4 KiB
 kernel-srpm-macros  noarch  1.0-24.fc41  fedora  1.9 KiB
 keyutils-libs  x86_64  1.6.3-4.fc41  fedora  54.4 KiB
 krb5-libs  x86_64  1.21.3-4.fc41  updates  2.3 MiB
 libacl  x86_64  2.3.2-2.fc41  fedora  40.0 KiB
 libarchive  x86_64  3.7.4-4.fc41  updates  926.6 KiB
 libattr  x86_64  2.5.2-4.fc41  fedora  28.5 KiB
 libblkid  x86_64  2.40.4-1.fc41  updates  257.2 KiB
 libbrotli  x86_64  1.1.0-5.fc41  fedora  837.6 KiB
 libcap  x86_64  2.70-4.fc41  fedora  220.2 KiB
 libcap-ng  x86_64  0.8.5-3.fc41  fedora  69.2 KiB
 libcom_err  x86_64  1.47.1-6.fc41  fedora  67.2 KiB
 libcurl  x86_64  8.9.1-3.fc41  updates  809.3 KiB
 libeconf  x86_64  0.6.2-3.fc41  fedora  58.0 KiB
 libevent  x86_64  2.1.12-14.fc41  fedora  895.7 KiB
 libfdisk  x86_64  2.40.4-1.fc41  updates  356.4 KiB
 libffi  x86_64  3.4.6-3.fc41  fedora  86.4 KiB
 libgcc  x86_64  14.2.1-7.fc41  updates  270.9 KiB
 libgomp  x86_64  14.2.1-7.fc41  updates  514.2 KiB
 libidn2  x86_64  2.3.7-2.fc41  fedora  329.1 KiB
 libmount  x86_64  2.40.4-1.fc41  updates  348.8 KiB
 libnghttp2  x86_64  1.62.1-2.fc41  fedora  166.1 KiB
 libnsl2  x86_64  2.0.1-2.fc41  fedora  57.9 KiB
 libpkgconf  x86_64  2.3.0-1.fc41  fedora  78.2 KiB
 libpsl  x86_64  0.21.5-4.fc41  fedora  80.5 KiB
 libpwquality  x86_64  1.4.5-11.fc41  fedora  417.8 KiB
 libselinux  x86_64  3.7-5.fc41  fedora  181.0 KiB
 libsemanage  x86_64  3.7-2.fc41  fedora  293.5 KiB
 libsepol  x86_64  3.7-2.fc41  fedora  817.8 KiB
 libsmartcols  x86_64  2.40.4-1.fc41  updates  176.2 KiB
 libssh  x86_64  0.10.6-8.fc41  fedora  513.3 KiB
 libssh-config  noarch  0.10.6-8.fc41  fedora  277.0 B
 libstdc++  x86_64  14.2.1-7.fc41  updates  2.7 MiB
 libtasn1  x86_64  4.20.0-1.fc41  updates  180.4 KiB
 libtirpc  x86_64  1.3.6-1.rc3.fc41  updates  197.6 KiB
 libtool-ltdl  x86_64  2.4.7-12.fc41  fedora  66.2 KiB
 libunistring  x86_64  1.1-8.fc41  fedora  1.7 MiB
 libutempter  x86_64  1.2.1-15.fc41  fedora  57.7 KiB
 libuuid  x86_64  2.40.4-1.fc41  updates  39.9 KiB
 libverto  x86_64  0.3.2-9.fc41  fedora  29.5 KiB
 libxcrypt  x86_64  4.4.38-6.fc41  updates  288.5 KiB
 libxml2  x86_64  2.12.9-1.fc41  updates  1.7 MiB
 libzstd  x86_64  1.5.7-1.fc41  updates  804.0 KiB
 lua-libs  x86_64  5.4.6-6.fc41  fedora  285.0 KiB
 lua-srpm-macros  noarch  1-14.fc41  fedora  1.3 KiB
 lz4-libs  x86_64  1.10.0-1.fc41  fedora  145.5 KiB
 mpfr  x86_64  4.2.1-5.fc41  fedora  832.1 KiB
 ncurses-base  noarch  6.5-2.20240629.fc41  fedora  326.3 KiB
 ncurses-libs  x86_64  6.5-2.20240629.fc41  fedora  975.2 KiB
 ocaml-srpm-macros  noarch  10-3.fc41  fedora  1.9 KiB
 openblas-srpm-macros  noarch  2-18.fc41  fedora  112.0 B
 openldap  x86_64  2.6.8-7.fc41  updates  631.4 KiB
 openssl-libs  x86_64  1:3.2.4-1.fc41  updates  7.8 MiB
 p11-kit  x86_64  0.25.5-3.fc41  fedora  2.2 MiB
 p11-kit-trust  x86_64  0.25.5-3.fc41  fedora  391.4 KiB
 package-notes-srpm-macros  noarch  0.5-12.fc41  fedora  1.6 KiB
 pam  x86_64  1.6.1-7.fc41  updates  1.8 MiB
 pam-libs  x86_64  1.6.1-7.fc41  updates  139.0 KiB
 pcre2  x86_64  10.44-1.fc41.1  fedora  653.5 KiB
 pcre2-syntax  noarch  10.44-1.fc41.1  fedora  251.6 KiB
 perl-srpm-macros  noarch  1-56.fc41  fedora  861.0 B
 pkgconf  x86_64  2.3.0-1.fc41  fedora  88.6 KiB
 pkgconf-m4  noarch  2.3.0-1.fc41  fedora  14.4 KiB
 pkgconf-pkg-config  x86_64  2.3.0-1.fc41  fedora  989.0 B
 popt  x86_64  1.19-7.fc41  fedora  136.9 KiB
 publicsuffix-list-dafsa  noarch  20250116-1.fc41  updates  68.5 KiB
 pyproject-srpm-macros  noarch  1.17.0-1.fc41  updates  1.9 KiB
 python-srpm-macros  noarch  3.13-3.fc41  fedora  51.0 KiB
 qt5-srpm-macros  noarch  5.15.15-1.fc41  fedora  500.0 B
 qt6-srpm-macros  noarch  6.8.2-1.fc41  updates  456.0 B
 readline  x86_64  8.2-10.fc41  fedora  493.2 KiB
 rpm  x86_64  4.20.0-1.fc41  fedora  3.1 MiB
 rpm-build-libs  x86_64  4.20.0-1.fc41  fedora  206.7 KiB
 rpm-libs  x86_64  4.20.0-1.fc41  fedora  725.9 KiB
 rpm-sequoia  x86_64  1.7.0-5.fc41  updates  2.4 MiB
 rust-srpm-macros  noarch  26.3-3.fc41  fedora  4.8 KiB
 setup  noarch  2.15.0-8.fc41  updates  720.7 KiB
 sqlite-libs  x86_64  3.46.1-2.fc41  updates  1.5 MiB
 systemd-libs  x86_64  256.11-1.fc41  updates  2.0 MiB
 util-linux-core  x86_64  2.40.4-1.fc41  updates  1.5 MiB
 xxhash-libs  x86_64  0.8.3-1.fc41  updates  88.5 KiB
 xz-libs  x86_64  1:5.6.2-2.fc41  fedora  214.4 KiB
 zig-srpm-macros  noarch  1-3.fc41  fedora  1.1 KiB
 zip  x86_64  3.0-41.fc41  fedora  703.2 KiB
 zlib-ng-compat  x86_64  2.2.3-2.fc41  updates  141.9 KiB
 zstd  x86_64  1.5.7-1.fc41  updates  1.7 MiB
Installing groups:
 Buildsystem building group

Transaction Summary:
 Installing:  154 packages

Total size of inbound packages is 53 MiB. Need to download 53 MiB.
After this operation, 181 MiB extra will be used (install 181 MiB, remove 0 B).
[  1/154] bzip2-0:1.0.8-19.fc41.x86_64 100% | 5.7 MiB/s | 52.5 KiB | 00m00s
[  2/154] cpio-0:2.15-2.fc41.x86_64 100% | 21.9 MiB/s | 291.8 KiB | 00m00s
[  3/154] diffutils-0:3.10-8.fc41.x86_6 100% | 99.0 MiB/s | 405.4 KiB | 00m00s
[  4/154] bash-0:5.2.32-1.fc41.x86_64 100% | 106.3 MiB/s | 1.8 MiB | 00m00s
[  5/154] grep-0:3.11-9.fc41.x86_64 100% | 73.2 MiB/s | 299.7 KiB | 00m00s
[  6/154] findutils-1:4.10.0-4.fc41.x86 100% | 107.1 MiB/s | 548.5 KiB | 00m00s
[  7/154] gzip-0:1.13-2.fc41.x86_64 100% | 41.6 MiB/s | 170.2 KiB | 00m00s
[  8/154] info-0:7.1-3.fc41.x86_64 100% | 59.4 MiB/s | 182.5 KiB | 00m00s
[  9/154] patch-0:2.7.6-25.fc41.x86_64 100% | 42.6 MiB/s | 131.0 KiB | 00m00s
[ 10/154] redhat-rpm-config-0:293-1.fc4 100% | 40.1 MiB/s | 82.0 KiB | 00m00s
[ 11/154] rpm-build-0:4.20.0-1.fc41.x86 100% | 40.4 MiB/s | 82.8 KiB | 00m00s
[ 12/154] sed-0:4.9-3.fc41.x86_64 100% | 103.4 MiB/s | 317.7 KiB | 00m00s
[ 13/154] tar-2:1.35-4.fc41.x86_64 100% | 210.1 MiB/s | 860.7 KiB | 00m00s
[ 14/154] unzip-0:6.0-64.fc41.x86_64 100% | 60.2 MiB/s | 184.9 KiB | 00m00s
[ 15/154] shadow-utils-2:4.15.1-12.fc41 100% | 188.5 MiB/s | 1.3 MiB | 00m00s
[ 16/154] which-0:2.21-42.fc41.x86_64 100% | 20.3 MiB/s | 41.6 KiB | 00m00s
[ 17/154] xz-1:5.6.2-2.fc41.x86_64 100% | 153.5 MiB/s | 471.5 KiB | 00m00s
[ 18/154] fedora-release-common-0:41-29 100% | 11.5 MiB/s | 23.6 KiB | 00m00s
[ 19/154] glibc-minimal-langpack-0:2.40 100% | 49.0 MiB/s | 100.4 KiB | 00m00s
[ 20/154] coreutils-0:9.5-11.fc41.x86_6 100% | 183.3 MiB/s | 1.1 MiB | 00m00s
[ 21/154] gawk-0:5.3.0-4.fc41.x86_64 100% | 178.5 MiB/s | 1.1 MiB | 00m00s
[ 22/154] util-linux-0:2.40.4-1.fc41.x8 100% | 155.0 MiB/s | 1.1 MiB | 00m00s
[ 23/154] ncurses-libs-0:6.5-2.20240629 100% | 65.2 MiB/s | 334.0 KiB | 00m00s
[ 24/154] filesystem-0:3.18-23.fc41.x86 100% | 135.9 MiB/s | 1.1 MiB | 00m00s
[ 25/154] bzip2-libs-0:1.0.8-19.fc41.x8 100% | 20.1 MiB/s | 41.1 KiB | 00m00s
[ 26/154] libselinux-0:3.7-5.fc41.x86_6 100% | 42.9 MiB/s | 87.8 KiB | 00m00s
[ 27/154] libattr-0:2.5.2-4.fc41.x86_64 100% | 17.7 MiB/s | 18.2 KiB | 00m00s
[ 28/154] pcre2-0:10.44-1.fc41.1.x86_64 100% | 118.7 MiB/s | 243.1 KiB | 00m00s
[ 29/154] ed-0:1.20.2-2.fc41.x86_64 100% | 26.6 MiB/s | 81.8 KiB | 00m00s
[ 30/154] ansible-srpm-macros-0:1-16.fc 100% | 20.3 MiB/s | 20.8 KiB | 00m00s
[ 31/154] dwz-0:0.15-8.fc41.x86_64 100% | 67.8 MiB/s | 138.9 KiB | 00m00s
[ 32/154] fonts-srpm-macros-1:2.0.5-17. 100% | 26.3 MiB/s | 27.0 KiB | 00m00s
[ 33/154] file-0:5.45-7.fc41.x86_64 100% | 16.0 MiB/s | 49.1 KiB | 00m00s
[ 34/154] fpc-srpm-macros-0:1.3-13.fc41 100% | 7.8 MiB/s | 8.0 KiB | 00m00s
[ 35/154] ghc-srpm-macros-0:1.9.1-2.fc4 100% | 8.8 MiB/s | 9.1 KiB | 00m00s
[ 36/154] gnat-srpm-macros-0:6-6.fc41.n 100% | 4.4 MiB/s | 9.0 KiB | 00m00s
[ 37/154] kernel-srpm-macros-0:1.0-24.f 100% | 4.8 MiB/s | 9.9 KiB | 00m00s
[ 38/154] lua-srpm-macros-0:1-14.fc41.n 100% | 4.3 MiB/s | 8.9 KiB | 00m00s
[ 39/154] openblas-srpm-macros-0:2-18.f 100% | 3.8 MiB/s | 7.7 KiB | 00m00s
[ 40/154] ocaml-srpm-macros-0:10-3.fc41 100% | 4.5 MiB/s | 9.2 KiB | 00m00s
[ 41/154] package-notes-srpm-macros-0:0 100% | 3.2 MiB/s | 9.8 KiB | 00m00s
[ 42/154] perl-srpm-macros-0:1-56.fc41. 100% | 8.3 MiB/s | 8.5 KiB | 00m00s
[ 43/154] python-srpm-macros-0:3.13-3.f 100% | 11.6 MiB/s | 23.7 KiB | 00m00s
[ 44/154] qt5-srpm-macros-0:5.15.15-1.f 100% | 8.7 MiB/s | 8.9 KiB | 00m00s
[ 45/154] zig-srpm-macros-0:1-3.fc41.no 100% | 7.9 MiB/s | 8.1 KiB | 00m00s
[ 46/154] rust-srpm-macros-0:26.3-3.fc4 100% | 5.9 MiB/s | 12.1 KiB | 00m00s
[ 47/154] rpm-0:4.20.0-1.fc41.x86_64 100% | 133.6 MiB/s | 547.4 KiB | 00m00s
[ 48/154] popt-0:1.19-7.fc41.x86_64 100% | 64.4 MiB/s | 65.9 KiB | 00m00s
[ 49/154] readline-0:8.2-10.fc41.x86_64 100% | 104.1 MiB/s | 213.2 KiB | 00m00s
[ 50/154] zip-0:3.0-41.fc41.x86_64 100% | 51.7 MiB/s | 264.8 KiB | 00m00s
[ 51/154] rpm-build-libs-0:4.20.0-1.fc4 100% | 32.1 MiB/s | 98.6 KiB | 00m00s
[ 52/154] libeconf-0:0.6.2-3.fc41.x86_6 100% | 31.4 MiB/s | 32.2 KiB | 00m00s
[ 53/154] libacl-0:2.3.2-2.fc41.x86_64 100% | 12.0 MiB/s | 24.5 KiB | 00m00s
[ 54/154] rpm-libs-0:4.20.0-1.fc41.x86_ 100% | 75.5 MiB/s | 309.4 KiB | 00m00s
[ 55/154] libsemanage-0:3.7-2.fc41.x86_ 100% | 56.8 MiB/s | 116.3 KiB | 00m00s
[ 56/154] xz-libs-1:5.6.2-2.fc41.x86_64 100% | 54.6 MiB/s | 111.8 KiB | 00m00s
[ 57/154] libcap-0:2.70-4.fc41.x86_64 100% | 42.3 MiB/s | 86.7 KiB | 00m00s
[ 58/154] gmp-1:6.3.0-2.fc41.x86_64 100% | 77.6 MiB/s | 318.0 KiB | 00m00s
[ 59/154] fedora-repos-0:41-1.noarch 100% | 4.5 MiB/s | 9.2 KiB | 00m00s
[ 60/154] mpfr-0:4.2.1-5.fc41.x86_64 100% | 84.5 MiB/s | 346.3 KiB | 00m00s
[ 61/154] coreutils-common-0:9.5-11.fc4 100% | 176.9 MiB/s | 2.1 MiB | 00m00s
[ 62/154] glibc-common-0:2.40-21.fc41.x 100% | 76.0 MiB/s | 389.2 KiB | 00m00s
[ 63/154] util-linux-core-0:2.40.4-1.fc 100% | 120.2 MiB/s | 492.2 KiB | 00m00s
[ 64/154] libcap-ng-0:0.8.5-3.fc41.x86_ 100% | 15.9 MiB/s | 32.6 KiB | 00m00s
[ 65/154] libutempter-0:1.2.1-15.fc41.x 100% | 13.0 MiB/s | 26.6 KiB | 00m00s
[ 66/154] ncurses-base-0:6.5-2.20240629 100% | 86.2 MiB/s | 88.3 KiB | 00m00s
[ 67/154] libsepol-0:3.7-2.fc41.x86_64 100% | 111.4 MiB/s | 342.2 KiB | 00m00s
[ 68/154] pcre2-syntax-0:10.44-1.fc41.1 100% | 73.2 MiB/s | 149.9 KiB | 00m00s
[ 69/154] file-libs-0:5.45-7.fc41.x86_6 100% | 186.0 MiB/s | 762.0 KiB | 00m00s
[ 70/154] fedora-gpg-keys-0:41-1.noarch 100% | 65.3 MiB/s | 133.7 KiB | 00m00s
[ 71/154] lua-libs-0:5.4.6-6.fc41.x86_6 100% | 32.2 MiB/s | 132.0 KiB | 00m00s
[ 72/154] basesystem-0:11-21.fc41.noarc 100% | 7.2 MiB/s | 7.4 KiB | 00m00s
[ 73/154] audit-libs-0:4.0.3-1.fc41.x86 100% | 40.6 MiB/s | 124.8 KiB | 00m00s
[ 74/154] glibc-gconv-extra-0:2.40-21.f 100% | 185.9 MiB/s | 1.7 MiB | 00m00s
[ 75/154] libxcrypt-0:4.4.38-6.fc41.x86 100% | 24.9 MiB/s | 127.6 KiB | 00m00s
[ 76/154] glibc-0:2.40-21.fc41.x86_64 100% | 158.5 MiB/s | 2.2 MiB | 00m00s
[ 77/154] pam-libs-0:1.6.1-7.fc41.x86_6 100% | 14.0 MiB/s | 57.4 KiB | 00m00s
[ 78/154] setup-0:2.15.0-8.fc41.noarch 100% | 50.3 MiB/s | 154.6 KiB | 00m00s
[ 79/154] sqlite-libs-0:3.46.1-2.fc41.x 100% | 140.4 MiB/s | 719.0 KiB | 00m00s
[ 80/154] libzstd-0:1.5.7-1.fc41.x86_64 100% | 44.0 MiB/s | 315.4 KiB | 00m00s
[ 81/154] rpm-sequoia-0:1.7.0-5.fc41.x8 100% | 111.3 MiB/s | 911.4 KiB | 00m00s
[ 82/154] zlib-ng-compat-0:2.2.3-2.fc41 100% | 38.5 MiB/s | 78.9 KiB | 00m00s
[ 83/154] elfutils-libelf-0:0.192-9.fc4 100% | 101.2 MiB/s | 207.3 KiB | 00m00s
[ 84/154] elfutils-libs-0:0.192-9.fc41. 100% | 127.9 MiB/s | 261.8 KiB | 00m00s
[ 85/154] elfutils-debuginfod-client-0: 100% | 22.6 MiB/s | 46.3 KiB | 00m00s
[ 86/154] elfutils-0:0.192-9.fc41.x86_6 100% | 106.7 MiB/s | 546.1 KiB | 00m00s
[ 87/154] json-c-0:0.17-4.fc41.x86_64 100% | 14.3 MiB/s | 44.0 KiB | 00m00s
[ 88/154] libgcc-0:14.2.1-7.fc41.x86_64 100% | 65.4 MiB/s | 134.0 KiB | 00m00s
[ 89/154] libgomp-0:14.2.1-7.fc41.x86_6 100% | 113.8 MiB/s | 349.5 KiB | 00m00s
[ 90/154] jansson-0:2.13.1-10.fc41.x86_ 100% | 21.7 MiB/s | 44.4 KiB | 00m00s
[ 91/154] debugedit-0:5.1-4.fc41.x86_64 100% | 36.9 MiB/s | 75.6 KiB | 00m00s
[ 92/154] libarchive-0:3.7.4-4.fc41.x86 100% | 99.9 MiB/s | 409.1 KiB | 00m00s
[ 93/154] lz4-libs-0:1.10.0-1.fc41.x86_ 100% | 23.0 MiB/s | 70.7 KiB | 00m00s
[ 94/154] pkgconf-pkg-config-0:2.3.0-1. 100% | 9.8 MiB/s | 10.0 KiB | 00m00s
[ 95/154] zstd-0:1.5.7-1.fc41.x86_64 100% | 118.0 MiB/s | 483.3 KiB | 00m00s
[ 96/154] pkgconf-0:2.3.0-1.fc41.x86_64 100% | 14.7 MiB/s | 45.2 KiB | 00m00s
[ 97/154] pkgconf-m4-0:2.3.0-1.fc41.noa 100% | 7.0 MiB/s | 14.3 KiB | 00m00s
[ 98/154] libpkgconf-0:2.3.0-1.fc41.x86 100% | 18.8 MiB/s | 38.5 KiB | 00m00s
[ 99/154] curl-0:8.9.1-3.fc41.x86_64 100% | 101.5 MiB/s | 311.9 KiB | 00m00s
[100/154] binutils-0:2.43.1-5.fc41.x86_ 100% | 261.7 MiB/s | 6.3 MiB | 00m00s
[101/154] build-reproducibility-srpm-ma 100% | 1.5 MiB/s | 10.8 KiB | 00m00s
[102/154] add-determinism-0:0.3.6-3.fc4 100% | 122.2 MiB/s | 875.9 KiB | 00m00s
[103/154] efi-srpm-macros-0:5-13.fc41.n 100% | 11.0 MiB/s | 22.5 KiB | 00m00s
[104/154] forge-srpm-macros-0:0.4.0-1.f 100% | 9.6 MiB/s | 19.7 KiB | 00m00s
[105/154] go-srpm-macros-0:3.6.0-5.fc41 100% | 13.7 MiB/s | 28.0 KiB | 00m00s
[106/154] pyproject-srpm-macros-0:1.17. 100% | 6.8 MiB/s | 14.0 KiB | 00m00s
[107/154] qt6-srpm-macros-0:6.8.2-1.fc4 100% | 4.5 MiB/s | 9.2 KiB | 00m00s
[108/154] libuuid-0:2.40.4-1.fc41.x86_6 100% | 13.2 MiB/s | 27.1 KiB | 00m00s
[109/154] libblkid-0:2.40.4-1.fc41.x86_ 100% | 58.2 MiB/s | 119.2 KiB | 00m00s
[110/154] libstdc++-0:14.2.1-7.fc41.x86 100% | 200.9 MiB/s | 822.8 KiB | 00m00s
[111/154] libsmartcols-0:2.40.4-1.fc41. 100% | 77.5 MiB/s | 79.4 KiB | 00m00s
[112/154] libmount-0:2.40.4-1.fc41.x86_ 100% | 72.8 MiB/s | 149.0 KiB | 00m00s
[113/154] libfdisk-0:2.40.4-1.fc41.x86_ 100% | 74.5 MiB/s | 152.5 KiB | 00m00s
[114/154] systemd-libs-0:256.11-1.fc41. 100% | 168.9 MiB/s | 691.7 KiB | 00m00s
[115/154] pam-0:1.6.1-7.fc41.x86_64 100% | 108.4 MiB/s | 555.0 KiB | 00m00s
[116/154] authselect-0:1.5.0-8.fc41.x86 100% | 35.6 MiB/s | 145.8 KiB | 00m00s
[117/154] gdbm-libs-1:1.23-7.fc41.x86_6 100% | 13.7 MiB/s | 56.3 KiB | 00m00s
[118/154] libnsl2-0:2.0.1-2.fc41.x86_64 100% | 14.5 MiB/s | 29.6 KiB | 00m00s
[119/154] libpwquality-0:1.4.5-11.fc41. 100% | 58.1 MiB/s | 119.0 KiB | 00m00s
[120/154] cracklib-0:2.9.11-6.fc41.x86_ 100% | 45.0 MiB/s | 92.1 KiB | 00m00s
[121/154] authselect-libs-0:1.5.0-8.fc4 100% | 71.0 MiB/s | 218.0 KiB | 00m00s
[122/154] libtirpc-0:1.3.6-1.rc3.fc41.x 100% | 29.1 MiB/s | 89.4 KiB | 00m00s
[123/154] ca-certificates-0:2024.2.69_v 100% | 170.2 MiB/s | 871.2 KiB | 00m00s
[124/154] openssl-libs-1:3.2.4-1.fc41.x 100% | 231.1 MiB/s | 2.3 MiB | 00m00s
[125/154] libcom_err-0:1.47.1-6.fc41.x8 100% | 8.6 MiB/s | 26.6 KiB | 00m00s
[126/154] gdbm-1:1.23-7.fc41.x86_64 100% | 74.1 MiB/s | 151.8 KiB | 00m00s
[127/154] crypto-policies-0:20250124-1. 100% | 47.8 MiB/s | 97.8 KiB | 00m00s
[128/154] keyutils-libs-0:1.6.3-4.fc41. 100% | 30.9 MiB/s | 31.6 KiB | 00m00s
[129/154] krb5-libs-0:1.21.3-4.fc41.x86 100% | 184.8 MiB/s | 757.0 KiB | 00m00s
[130/154] libverto-0:0.3.2-9.fc41.x86_6 100% | 6.7 MiB/s | 20.7 KiB | 00m00s
[131/154] libxml2-0:2.12.9-1.fc41.x86_6 100% | 128.3 MiB/s | 656.8 KiB | 00m00s
[132/154] elfutils-default-yama-scope-0 100% | 6.0 MiB/s | 12.4 KiB | 00m00s
[133/154] alternatives-0:1.31-1.fc41.x8 100% | 19.2 MiB/s | 39.4 KiB | 00m00s
[134/154] libffi-0:3.4.6-3.fc41.x86_64 100% | 19.5 MiB/s | 39.9 KiB | 00m00s
[135/154] p11-kit-trust-0:0.25.5-3.fc41 100% | 64.5 MiB/s | 132.1 KiB | 00m00s
[136/154] p11-kit-0:0.25.5-3.fc41.x86_6 100% | 159.8 MiB/s | 490.9 KiB | 00m00s
[137/154] libtasn1-0:4.20.0-1.fc41.x86_ 100% | 72.7 MiB/s | 74.4 KiB | 00m00s
[138/154] fedora-release-0:41-29.noarch 100% | 12.5 MiB/s | 12.8 KiB | 00m00s
[139/154] xxhash-libs-0:0.8.3-1.fc41.x8 100% | 17.5 MiB/s | 35.9 KiB | 00m00s
[140/154] fedora-release-identity-basic 100% | 6.6 MiB/s | 13.6 KiB | 00m00s
[141/154] libcurl-0:8.9.1-3.fc41.x86_64 100% | 113.9 MiB/s | 350.0 KiB | 00m00s
[142/154] libbrotli-0:1.1.0-5.fc41.x86_ 100% | 83.1 MiB/s | 340.5 KiB | 00m00s
[143/154] libidn2-0:2.3.7-2.fc41.x86_64 100% | 38.5 MiB/s | 118.4 KiB | 00m00s
[144/154] gdb-minimal-0:16.2-1.fc41.x86 100% | 292.4 MiB/s | 4.4 MiB | 00m00s
[145/154] libnghttp2-0:1.62.1-2.fc41.x8 100% | 12.5 MiB/s | 76.6 KiB | 00m00s
[146/154] libpsl-0:0.21.5-4.fc41.x86_64 100% | 15.6 MiB/s | 64.1 KiB | 00m00s
[147/154] libssh-0:0.10.6-8.fc41.x86_64 100% | 68.9 MiB/s | 211.8 KiB | 00m00s
[148/154] libssh-config-0:0.10.6-8.fc41 100% | 3.0 MiB/s | 9.2 KiB | 00m00s
[149/154] libunistring-0:1.1-8.fc41.x86 100% | 133.0 MiB/s | 544.8 KiB | 00m00s
[150/154] publicsuffix-list-dafsa-0:202 100% | 28.7 MiB/s | 58.8 KiB | 00m00s
[151/154] openldap-0:2.6.8-7.fc41.x86_6 100% | 79.1 MiB/s | 243.0 KiB | 00m00s
[152/154] libevent-0:2.1.12-14.fc41.x86 100% | 125.7 MiB/s | 257.5 KiB | 00m00s
[153/154] cyrus-sasl-lib-0:2.1.28-27.fc 100% | 155.2 MiB/s | 794.9 KiB | 00m00s
[154/154] libtool-ltdl-0:2.4.7-12.fc41. 100% | 11.6 MiB/s | 35.6 KiB | 00m00s
--------------------------------------------------------------------------------
[154/154] Total 100% | 83.5 MiB/s | 52.8 MiB | 00m01s
Running transaction
Importing OpenPGP key 0xE99D6AD1:
 UserID     : "Fedora (41) <fedora-41-primary@fedoraproject.org>"
 Fingerprint: 466CF2D8B60BC3057AA9453ED0622462E99D6AD1
 From       : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-41-primary
The key was successfully imported.
[  1/156] Verify package files 100% | 956.0 B/s | 154.0 B | 00m00s
[  2/156] Prepare transaction 100% | 4.4 KiB/s | 154.0 B | 00m00s
[  3/156] Installing libgcc-0:14.2.1-7. 100% | 266.1 MiB/s | 272.5 KiB | 00m00s
[  4/156] Installing publicsuffix-list- 100% | 0.0 B/s | 69.2 KiB | 00m00s
[  5/156] Installing libssh-config-0:0. 100% | 0.0 B/s | 816.0 B | 00m00s
[  6/156] Installing fedora-release-ide 100% | 0.0 B/s | 940.0 B | 00m00s
[  7/156] Installing fedora-gpg-keys-0: 100% | 56.1 MiB/s | 172.2 KiB | 00m00s
[  8/156] Installing fedora-repos-0:41- 100% | 0.0 B/s | 5.7 KiB | 00m00s
[  9/156] Installing fedora-release-com 100% | 23.4 MiB/s | 24.0 KiB | 00m00s
[ 10/156] Installing fedora-release-0:4 100% | 0.0 B/s | 124.0 B | 00m00s
[ 11/156] Installing setup-0:2.15.0-8.f 100% | 64.5 MiB/s | 726.5 KiB | 00m00s
>>> [RPM] /etc/hosts created as /etc/hosts.rpmnew
[ 12/156] Installing filesystem-0:3.18- 100% | 3.9 MiB/s | 212.5 KiB | 00m00s
[ 13/156] Installing basesystem-0:11-21 100% | 0.0 B/s | 124.0 B | 00m00s
[ 14/156] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 732.0 B | 00m00s
[ 15/156] Installing pkgconf-m4-0:2.3.0 100% | 0.0 B/s | 14.8 KiB | 00m00s
[ 16/156] Installing pcre2-syntax-0:10. 100% | 248.1 MiB/s | 254.1 KiB | 00m00s
[ 17/156] Installing ncurses-base-0:6.5 100% | 85.9 MiB/s | 351.7 KiB | 00m00s
[ 18/156] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s
[ 19/156] Installing ncurses-libs-0:6.5 100% | 239.7 MiB/s | 981.8 KiB | 00m00s
[ 20/156] Installing glibc-0:2.40-21.fc 100% | 318.4 MiB/s | 6.7 MiB | 00m00s
[ 21/156] Installing bash-0:5.2.32-1.fc 100% | 453.8 MiB/s | 8.2 MiB | 00m00s
[ 22/156] Installing glibc-common-0:2.4 100% | 210.2 MiB/s | 1.1 MiB | 00m00s
[ 23/156] Installing glibc-gconv-extra- 100% | 309.3 MiB/s | 8.0 MiB | 00m00s
[ 24/156] Installing zlib-ng-compat-0:2 100% | 0.0 B/s | 142.7 KiB | 00m00s
[ 25/156] Installing bzip2-libs-0:1.0.8 100% | 0.0 B/s | 81.8 KiB | 00m00s
[ 26/156] Installing xz-libs-1:5.6.2-2. 100% | 210.4 MiB/s | 215.5 KiB | 00m00s
[ 27/156] Installing popt-0:1.19-7.fc41 100% | 70.1 MiB/s | 143.5 KiB | 00m00s
[ 28/156] Installing readline-0:8.2-10. 100% | 241.8 MiB/s | 495.3 KiB | 00m00s
[ 29/156] Installing libuuid-0:2.40.4-1 100% | 0.0 B/s | 41.0 KiB | 00m00s
[ 30/156] Installing libblkid-0:2.40.4- 100% | 252.1 MiB/s | 258.2 KiB | 00m00s
[ 31/156] Installing libattr-0:2.5.2-4. 100% | 0.0 B/s | 29.5 KiB | 00m00s
[ 32/156] Installing libacl-0:2.3.2-2.f 100% | 0.0 B/s | 40.7 KiB | 00m00s
[ 33/156] Installing gmp-1:6.3.0-2.fc41 100% | 397.3 MiB/s | 813.7 KiB | 00m00s
[ 34/156] Installing libxcrypt-0:4.4.38 100% | 284.4 MiB/s | 291.2 KiB | 00m00s
[ 35/156] Installing libzstd-0:1.5.7-1. 100% | 393.1 MiB/s | 805.1 KiB | 00m00s
[ 36/156] Installing elfutils-libelf-0: 100% | 390.0 MiB/s | 1.2 MiB | 00m00s
[ 37/156] Installing libstdc++-0:14.2.1 100% | 449.9 MiB/s | 2.7 MiB | 00m00s
[ 38/156] Installing libeconf-0:0.6.2-3 100% | 0.0 B/s | 59.7 KiB | 00m00s
[ 39/156] Installing gdbm-libs-1:1.23-7 100% | 120.7 MiB/s | 123.6 KiB | 00m00s
[ 40/156] Installing dwz-0:0.15-8.fc41. 100% | 293.3 MiB/s | 300.3 KiB | 00m00s
[ 41/156] Installing mpfr-0:4.2.1-5.fc4 100% | 407.1 MiB/s | 833.7 KiB | 00m00s
[ 42/156] Installing gawk-0:5.3.0-4.fc4 100% | 346.4 MiB/s | 1.7 MiB | 00m00s
[ 43/156] Installing unzip-0:6.0-64.fc4 100% | 381.1 MiB/s | 390.3 KiB | 00m00s
[ 44/156] Installing file-libs-0:5.45-7 100% | 764.2 MiB/s | 9.9 MiB | 00m00s
[ 45/156] Installing file-0:5.45-7.fc41 100% | 11.4 MiB/s | 105.0 KiB | 00m00s
[ 46/156] Installing crypto-policies-0: 100% | 40.0 MiB/s | 163.7 KiB | 00m00s
[ 47/156] Installing pcre2-0:10.44-1.fc 100% | 319.8 MiB/s | 654.9 KiB | 00m00s
[ 48/156] Installing grep-0:3.11-9.fc41 100% | 250.8 MiB/s | 1.0 MiB | 00m00s
[ 49/156] Installing xz-1:5.6.2-2.fc41. 100% | 241.0 MiB/s | 1.2 MiB | 00m00s
[ 50/156] Installing libcap-ng-0:0.8.5- 100% | 0.0 B/s | 71.0 KiB | 00m00s
[ 51/156] Installing audit-libs-0:4.0.3 100% | 345.1 MiB/s | 353.4 KiB | 00m00s
[ 52/156] Installing pam-libs-0:1.6.1-7 100% | 137.9 MiB/s | 141.3 KiB | 00m00s
[ 53/156] Installing libcap-0:2.70-4.fc 100% | 110.0 MiB/s | 225.2 KiB | 00m00s
[ 54/156] Installing systemd-libs-0:256 100% | 398.0 MiB/s | 2.0 MiB | 00m00s
[ 55/156] Installing libsepol-0:3.7-2.f 100% | 399.8 MiB/s | 818.8 KiB | 00m00s
[ 56/156] Installing libselinux-0:3.7-5 100% | 178.0 MiB/s | 182.3 KiB | 00m00s
[ 57/156] Installing sed-0:4.9-3.fc41.x 100% | 283.1 MiB/s | 869.7 KiB | 00m00s
[ 58/156] Installing findutils-1:4.10.0 100% | 371.6 MiB/s | 1.9 MiB | 00m00s
[ 59/156] Installing libmount-0:2.40.4- 100% | 341.6 MiB/s | 349.8 KiB | 00m00s
[ 60/156] Installing lua-libs-0:5.4.6-6 100% | 279.5 MiB/s | 286.2 KiB | 00m00s
[ 61/156] Installing lz4-libs-0:1.10.0- 100% | 143.1 MiB/s | 146.6 KiB | 00m00s
[ 62/156] Installing libsmartcols-0:2.4 100% | 173.2 MiB/s | 177.4 KiB | 00m00s
[ 63/156] Installing libcom_err-0:1.47. 100% | 0.0 B/s | 68.3 KiB | 00m00s
[ 64/156] Installing alternatives-0:1.3 100% | 0.0 B/s | 66.4 KiB | 00m00s
[ 65/156] Installing libffi-0:3.4.6-3.f 100% | 0.0 B/s | 87.8 KiB | 00m00s
[ 66/156] Installing libtasn1-0:4.20.0- 100% | 177.9 MiB/s | 182.2 KiB | 00m00s
[ 67/156] Installing p11-kit-0:0.25.5-3 100% | 315.3 MiB/s | 2.2 MiB | 00m00s
[ 68/156] Installing libunistring-0:1.1 100% | 432.7 MiB/s | 1.7 MiB | 00m00s
[ 69/156] Installing libidn2-0:2.3.7-2. 100% | 163.6 MiB/s | 335.1 KiB | 00m00s
[ 70/156] Installing libpsl-0:0.21.5-4. 100% | 0.0 B/s | 81.7 KiB | 00m00s
[ 71/156] Installing p11-kit-trust-0:0. 100% | 42.7 MiB/s | 393.1 KiB | 00m00s
[ 72/156] Installing util-linux-core-0: 100% | 245.8 MiB/s | 1.5 MiB | 00m00s
[ 73/156] Installing zstd-0:1.5.7-1.fc4 100% | 427.5 MiB/s | 1.7 MiB | 00m00s
[ 74/156] Installing tar-2:1.35-4.fc41. 100% | 422.6 MiB/s | 3.0 MiB | 00m00s
[ 75/156] Installing libsemanage-0:3.7- 100% | 144.2 MiB/s | 295.2 KiB | 00m00s
[ 76/156] Installing shadow-utils-2:4.1 100% | 173.6 MiB/s | 4.2 MiB | 00m00s
[ 77/156] Installing libutempter-0:1.2. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s
[ 78/156] Installing zip-0:3.0-41.fc41. 100% | 345.2 MiB/s | 707.1 KiB | 00m00s
[ 79/156] Installing gdbm-1:1.23-7.fc41 100% | 227.4 MiB/s | 465.8 KiB | 00m00s
[ 80/156] Installing cyrus-sasl-lib-0:2 100% | 384.3 MiB/s | 2.3 MiB | 00m00s
[ 81/156] Installing libfdisk-0:2.40.4- 100% | 349.0 MiB/s | 357.4 KiB | 00m00s
[ 82/156] Installing libxml2-0:2.12.9-1 100% | 421.5 MiB/s | 1.7 MiB | 00m00s
[ 83/156] Installing bzip2-0:1.0.8-19.f 100% | 97.8 MiB/s | 100.2 KiB | 00m00s
[ 84/156] Installing sqlite-libs-0:3.46 100% | 368.2 MiB/s | 1.5 MiB | 00m00s
[ 85/156] Installing add-determinism-0: 100% | 471.2 MiB/s | 2.4 MiB | 00m00s
[ 86/156] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s
[ 87/156] Installing ed-0:1.20.2-2.fc41 100% | 145.7 MiB/s | 149.2 KiB | 00m00s
[ 88/156] Installing patch-0:2.7.6-25.f 100% | 261.9 MiB/s | 268.2 KiB | 00m00s
[ 89/156] Installing elfutils-default-y 100% | 340.5 KiB/s | 2.0 KiB | 00m00s
[ 90/156] Installing elfutils-libs-0:0. 100% | 328.2 MiB/s | 672.1 KiB | 00m00s
[ 91/156] Installing cpio-0:2.15-2.fc41 100% | 274.9 MiB/s | 1.1 MiB | 00m00s
[ 92/156] Installing diffutils-0:3.10-8 100% | 318.1 MiB/s | 1.6 MiB | 00m00s
[ 93/156] Installing json-c-0:0.17-4.fc 100% | 81.7 MiB/s | 83.6 KiB | 00m00s
[ 94/156] Installing libgomp-0:14.2.1-7 100% | 503.5 MiB/s | 515.6 KiB | 00m00s
[ 95/156] Installing jansson-0:2.13.1-1 100% | 0.0 B/s | 89.7 KiB | 00m00s
[ 96/156] Installing libpkgconf-0:2.3.0 100% | 0.0 B/s | 79.3 KiB | 00m00s
[ 97/156] Installing pkgconf-0:2.3.0-1. 100% | 89.0 MiB/s | 91.1 KiB | 00m00s
[ 98/156] Installing pkgconf-pkg-config 100% | 0.0 B/s | 1.8 KiB | 00m00s
[ 99/156] Installing keyutils-libs-0:1. 100% | 0.0 B/s | 55.8 KiB | 00m00s
[100/156] Installing libverto-0:0.3.2-9 100% | 0.0 B/s | 31.3 KiB | 00m00s
[101/156] Installing xxhash-libs-0:0.8. 100% | 0.0 B/s | 89.9 KiB | 00m00s
[102/156] Installing libbrotli-0:1.1.0- 100% | 410.1 MiB/s | 839.9 KiB | 00m00s
[103/156] Installing libnghttp2-0:1.62. 100% | 163.2 MiB/s | 167.1 KiB | 00m00s
[104/156] Installing libtool-ltdl-0:2.4 100% | 65.7 MiB/s | 67.3 KiB | 00m00s
[105/156] Installing coreutils-common-0 100% | 447.6 MiB/s | 11.2 MiB | 00m00s
[106/156] Installing openssl-libs-1:3.2 100% | 489.3 MiB/s | 7.8 MiB | 00m00s
[107/156] Installing coreutils-0:9.5-11 100% | 285.3 MiB/s | 5.7 MiB | 00m00s
[108/156] Installing ca-certificates-0: 100% | 4.3 MiB/s | 2.4 MiB | 00m01s
[109/156] Installing krb5-libs-0:1.21.3 100% | 289.9 MiB/s | 2.3 MiB | 00m00s
[110/156] Installing libarchive-0:3.7.4 100% | 302.3 MiB/s | 928.6 KiB | 00m00s
[111/156] Installing libtirpc-0:1.3.6-1 100% | 194.7 MiB/s | 199.4 KiB | 00m00s
[112/156] Installing gzip-0:1.13-2.fc41 100% | 192.7 MiB/s | 394.6 KiB | 00m00s
[113/156] Installing authselect-libs-0: 100% | 204.4 MiB/s | 837.2 KiB | 00m00s
[114/156] Installing cracklib-0:2.9.11- 100% | 81.5 MiB/s | 250.3 KiB | 00m00s
[115/156] Installing libpwquality-0:1.4 100% | 140.0 MiB/s | 430.1 KiB | 00m00s
[116/156] Installing libnsl2-0:2.0.1-2. 100% | 57.7 MiB/s | 59.1 KiB | 00m00s
[117/156] Installing pam-0:1.6.1-7.fc41 100% | 187.8 MiB/s | 1.9 MiB | 00m00s
[118/156] Installing libssh-0:0.10.6-8. 100% | 251.7 MiB/s | 515.4 KiB | 00m00s
[119/156] Installing rpm-sequoia-0:1.7. 100% | 403.1 MiB/s | 2.4 MiB | 00m00s
[120/156] Installing rpm-libs-0:4.20.0- 100% | 355.2 MiB/s | 727.4 KiB | 00m00s
[121/156] Installing rpm-build-libs-0:4 100% | 202.6 MiB/s | 207.5 KiB | 00m00s
[122/156] Installing libevent-0:2.1.12- 100% | 292.8 MiB/s | 899.5 KiB | 00m00s
[123/156] Installing openldap-0:2.6.8-7 100% | 310.2 MiB/s | 635.2 KiB | 00m00s
[124/156] Installing libcurl-0:8.9.1-3. 100% | 395.7 MiB/s | 810.4 KiB | 00m00s
[125/156] Installing elfutils-debuginfo 100% | 84.4 MiB/s | 86.5 KiB | 00m00s
[126/156] Installing elfutils-0:0.192-9 100% | 382.2 MiB/s | 2.7 MiB | 00m00s
[127/156] Installing binutils-0:2.43.1- 100% | 402.8 MiB/s | 27.4 MiB | 00m00s
[128/156] Installing gdb-minimal-0:16.2 100% | 442.0 MiB/s | 13.3 MiB | 00m00s
[129/156] Installing debugedit-0:5.1-4. 100% | 195.7 MiB/s | 200.4 KiB | 00m00s
[130/156] Installing curl-0:8.9.1-3.fc4 100% | 70.7 MiB/s | 796.0 KiB | 00m00s
[131/156] Installing rpm-0:4.20.0-1.fc4 100% | 208.8 MiB/s | 2.5 MiB | 00m00s
[132/156] Installing lua-srpm-macros-0: 100% | 0.0 B/s | 1.9 KiB | 00m00s
[133/156] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s
[134/156] Installing efi-srpm-macros-0: 100% | 0.0 B/s | 41.2 KiB | 00m00s
[135/156] Installing rust-srpm-macros-0 100% | 0.0 B/s | 5.6 KiB | 00m00s
[136/156] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s
[137/156] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s
[138/156] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s
[139/156] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s
[140/156] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s
[141/156] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s
[142/156] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s
[143/156] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s
[144/156] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s
[145/156] Installing ansible-srpm-macro 100% | 0.0 B/s | 36.2 KiB | 00m00s
[146/156] Installing python-srpm-macros 100% | 0.0 B/s | 52.2 KiB | 00m00s
[147/156] Installing fonts-srpm-macros- 100% | 0.0 B/s | 57.0 KiB | 00m00s
[148/156] Installing forge-srpm-macros- 100% | 0.0 B/s | 40.3 KiB | 00m00s
[149/156] Installing go-srpm-macros-0:3 100% | 60.5 MiB/s | 62.0 KiB | 00m00s
[150/156] Installing redhat-rpm-config- 100% | 185.6 MiB/s | 190.1 KiB | 00m00s
[151/156] Installing rpm-build-0:4.20.0 100% | 99.0 MiB/s | 202.8 KiB | 00m00s
[152/156] Installing pyproject-srpm-mac 100% | 2.4 MiB/s | 2.5 KiB | 00m00s
[153/156] Installing util-linux-0:2.40. 100% | 176.4 MiB/s | 3.7 MiB | 00m00s
[154/156] Installing authselect-0:1.5.0 100% | 79.1 MiB/s | 161.9 KiB | 00m00s
[155/156] Installing which-0:2.21-42.fc 100% | 80.5 MiB/s | 82.4 KiB | 00m00s
[156/156] Installing info-0:7.1-3.fc41. 100% | 481.6 KiB/s | 362.2 KiB | 00m01s
Complete!
Finish: installing minimal buildroot with dnf5
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
INFO: add-determinism-0.3.6-3.fc41.x86_64 alternatives-1.31-1.fc41.x86_64 ansible-srpm-macros-1-16.fc41.noarch
  audit-libs-4.0.3-1.fc41.x86_64 authselect-1.5.0-8.fc41.x86_64 authselect-libs-1.5.0-8.fc41.x86_64
  basesystem-11-21.fc41.noarch bash-5.2.32-1.fc41.x86_64 binutils-2.43.1-5.fc41.x86_64
  build-reproducibility-srpm-macros-0.3.6-3.fc41.noarch bzip2-1.0.8-19.fc41.x86_64 bzip2-libs-1.0.8-19.fc41.x86_64
  ca-certificates-2024.2.69_v8.0.401-1.0.fc41.noarch coreutils-9.5-11.fc41.x86_64 coreutils-common-9.5-11.fc41.x86_64
  cpio-2.15-2.fc41.x86_64 cracklib-2.9.11-6.fc41.x86_64 crypto-policies-20250124-1.git4d262e7.fc41.noarch
  curl-8.9.1-3.fc41.x86_64 cyrus-sasl-lib-2.1.28-27.fc41.x86_64 debugedit-5.1-4.fc41.x86_64
  diffutils-3.10-8.fc41.x86_64 dwz-0.15-8.fc41.x86_64 ed-1.20.2-2.fc41.x86_64 efi-srpm-macros-5-13.fc41.noarch
  elfutils-0.192-9.fc41.x86_64 elfutils-debuginfod-client-0.192-9.fc41.x86_64
  elfutils-default-yama-scope-0.192-9.fc41.noarch elfutils-libelf-0.192-9.fc41.x86_64 elfutils-libs-0.192-9.fc41.x86_64
  fedora-gpg-keys-41-1.noarch fedora-release-41-29.noarch fedora-release-common-41-29.noarch
  fedora-release-identity-basic-41-29.noarch fedora-repos-41-1.noarch file-5.45-7.fc41.x86_64
  file-libs-5.45-7.fc41.x86_64 filesystem-3.18-23.fc41.x86_64 findutils-4.10.0-4.fc41.x86_64
  fonts-srpm-macros-2.0.5-17.fc41.noarch forge-srpm-macros-0.4.0-1.fc41.noarch fpc-srpm-macros-1.3-13.fc41.noarch
  gawk-5.3.0-4.fc41.x86_64 gdb-minimal-16.2-1.fc41.x86_64 gdbm-1.23-7.fc41.x86_64 gdbm-libs-1.23-7.fc41.x86_64
  ghc-srpm-macros-1.9.1-2.fc41.noarch glibc-2.40-21.fc41.x86_64 glibc-common-2.40-21.fc41.x86_64
  glibc-gconv-extra-2.40-21.fc41.x86_64 glibc-minimal-langpack-2.40-21.fc41.x86_64 gmp-6.3.0-2.fc41.x86_64
  gnat-srpm-macros-6-6.fc41.noarch go-srpm-macros-3.6.0-5.fc41.noarch gpg-pubkey-e99d6ad1-64d2612c
  grep-3.11-9.fc41.x86_64 gzip-1.13-2.fc41.x86_64 info-7.1-3.fc41.x86_64 jansson-2.13.1-10.fc41.x86_64
  json-c-0.17-4.fc41.x86_64 kernel-srpm-macros-1.0-24.fc41.noarch keyutils-libs-1.6.3-4.fc41.x86_64
  krb5-libs-1.21.3-4.fc41.x86_64 libacl-2.3.2-2.fc41.x86_64 libarchive-3.7.4-4.fc41.x86_64
  libattr-2.5.2-4.fc41.x86_64 libblkid-2.40.4-1.fc41.x86_64 libbrotli-1.1.0-5.fc41.x86_64 libcap-2.70-4.fc41.x86_64
  libcap-ng-0.8.5-3.fc41.x86_64 libcom_err-1.47.1-6.fc41.x86_64 libcurl-8.9.1-3.fc41.x86_64
  libeconf-0.6.2-3.fc41.x86_64 libevent-2.1.12-14.fc41.x86_64 libfdisk-2.40.4-1.fc41.x86_64
  libffi-3.4.6-3.fc41.x86_64 libgcc-14.2.1-7.fc41.x86_64 libgomp-14.2.1-7.fc41.x86_64 libidn2-2.3.7-2.fc41.x86_64
  libmount-2.40.4-1.fc41.x86_64 libnghttp2-1.62.1-2.fc41.x86_64 libnsl2-2.0.1-2.fc41.x86_64
  libpkgconf-2.3.0-1.fc41.x86_64 libpsl-0.21.5-4.fc41.x86_64 libpwquality-1.4.5-11.fc41.x86_64
  libselinux-3.7-5.fc41.x86_64 libsemanage-3.7-2.fc41.x86_64 libsepol-3.7-2.fc41.x86_64
  libsmartcols-2.40.4-1.fc41.x86_64 libssh-0.10.6-8.fc41.x86_64 libssh-config-0.10.6-8.fc41.noarch
  libstdc++-14.2.1-7.fc41.x86_64 libtasn1-4.20.0-1.fc41.x86_64 libtirpc-1.3.6-1.rc3.fc41.x86_64
  libtool-ltdl-2.4.7-12.fc41.x86_64 libunistring-1.1-8.fc41.x86_64 libutempter-1.2.1-15.fc41.x86_64
  libuuid-2.40.4-1.fc41.x86_64 libverto-0.3.2-9.fc41.x86_64 libxcrypt-4.4.38-6.fc41.x86_64
  libxml2-2.12.9-1.fc41.x86_64 libzstd-1.5.7-1.fc41.x86_64 lua-libs-5.4.6-6.fc41.x86_64
  lua-srpm-macros-1-14.fc41.noarch lz4-libs-1.10.0-1.fc41.x86_64 mpfr-4.2.1-5.fc41.x86_64
  ncurses-base-6.5-2.20240629.fc41.noarch ncurses-libs-6.5-2.20240629.fc41.x86_64 ocaml-srpm-macros-10-3.fc41.noarch
  openblas-srpm-macros-2-18.fc41.noarch openldap-2.6.8-7.fc41.x86_64 openssl-libs-3.2.4-1.fc41.x86_64
  p11-kit-0.25.5-3.fc41.x86_64 p11-kit-trust-0.25.5-3.fc41.x86_64 package-notes-srpm-macros-0.5-12.fc41.noarch
  pam-1.6.1-7.fc41.x86_64 pam-libs-1.6.1-7.fc41.x86_64 patch-2.7.6-25.fc41.x86_64 pcre2-10.44-1.fc41.1.x86_64
  pcre2-syntax-10.44-1.fc41.1.noarch perl-srpm-macros-1-56.fc41.noarch pkgconf-2.3.0-1.fc41.x86_64
  pkgconf-m4-2.3.0-1.fc41.noarch pkgconf-pkg-config-2.3.0-1.fc41.x86_64 popt-1.19-7.fc41.x86_64
  publicsuffix-list-dafsa-20250116-1.fc41.noarch pyproject-srpm-macros-1.17.0-1.fc41.noarch
  python-srpm-macros-3.13-3.fc41.noarch qt5-srpm-macros-5.15.15-1.fc41.noarch qt6-srpm-macros-6.8.2-1.fc41.noarch
  readline-8.2-10.fc41.x86_64 redhat-rpm-config-293-1.fc41.noarch rpm-4.20.0-1.fc41.x86_64
  rpm-build-4.20.0-1.fc41.x86_64 rpm-build-libs-4.20.0-1.fc41.x86_64 rpm-libs-4.20.0-1.fc41.x86_64
  rpm-sequoia-1.7.0-5.fc41.x86_64 rust-srpm-macros-26.3-3.fc41.noarch sed-4.9-3.fc41.x86_64 setup-2.15.0-8.fc41.noarch
  shadow-utils-4.15.1-12.fc41.x86_64 sqlite-libs-3.46.1-2.fc41.x86_64 systemd-libs-256.11-1.fc41.x86_64
  tar-1.35-4.fc41.x86_64 unzip-6.0-64.fc41.x86_64 util-linux-2.40.4-1.fc41.x86_64 util-linux-core-2.40.4-1.fc41.x86_64
  which-2.21-42.fc41.x86_64 xxhash-libs-0.8.3-1.fc41.x86_64 xz-5.6.2-2.fc41.x86_64 xz-libs-5.6.2-2.fc41.x86_64
  zig-srpm-macros-1-3.fc41.noarch zip-3.0-41.fc41.x86_64 zlib-ng-compat-2.2.3-2.fc41.x86_64 zstd-1.5.7-1.fc41.x86_64
Start: buildsrpm
Start: rpmbuild -bs
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1740787200
Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.src.rpm
Finish: rpmbuild -bs
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-41-x86_64-1740863228.713984/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
Finish: buildsrpm
INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-a8jvyids/python-tapyoca/python-tapyoca.spec) Config(child) 0 minutes 20 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
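Both rpmbuild stages pin SOURCE_DATE_EPOCH=1740787200 so that timestamps baked into the packages are reproducible; the value is conventionally derived from the spec's newest %changelog entry. Decoding it:

```python
# The SOURCE_DATE_EPOCH value printed by rpmbuild in this log.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1740787200, tz=timezone.utc))
# 2025-03-01 00:00:00+00:00 -- midnight UTC of the changelog date
```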
INFO: Start(/var/lib/copr-rpmbuild/results/python-tapyoca-0.0.4-1.fc41.src.rpm) Config(fedora-41-x86_64)
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1740863228.713984/root.
INFO: reusing tmpfs at /var/lib/mock/fedora-41-x86_64-bootstrap-1740863228.713984/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-41-x86_64-1740863228.713984/root.
INFO: calling preinit hooks
INFO: enabled root cache
Start: unpacking root cache
Finish: unpacking root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.0-1.fc41.x86_64
  rpm-sequoia-1.7.0-5.fc41.x86_64
  dnf5-5.2.10.0-2.fc41.x86_64
  dnf5-plugins-5.2.10.0-2.fc41.x86_64
Finish: chroot init
Start: build phase for python-tapyoca-0.0.4-1.fc41.src.rpm
Start: build setup for python-tapyoca-0.0.4-1.fc41.src.rpm
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1740787200
Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.src.rpm
Updating and loading repositories:
 updates          100% | 507.4 KiB/s |  28.9 KiB | 00m00s
 fedora           100% | 525.6 KiB/s |  30.5 KiB | 00m00s
 Copr repository  100% |  77.0 KiB/s |   1.5 KiB | 00m00s
Repositories loaded.

Package  Arch  Version  Repository  Size
Installing:
 python3-devel  x86_64  3.13.2-1.fc41  updates  1.8 MiB
Installing dependencies:
 expat  x86_64  2.6.4-1.fc41  updates  292.9 KiB
 libb2  x86_64  0.98.1-12.fc41  fedora  42.2 KiB
 mpdecimal  x86_64  2.5.1-16.fc41  fedora  204.9 KiB
 pyproject-rpm-macros  noarch  1.17.0-1.fc41  updates  114.0 KiB
 python-pip-wheel  noarch  24.2-1.fc41  fedora  1.2 MiB
 python-rpm-macros  noarch  3.13-3.fc41  fedora  22.1 KiB
 python3  x86_64  3.13.2-1.fc41  updates  31.8 KiB
 python3-libs  x86_64  3.13.2-1.fc41  updates  40.4 MiB
 python3-packaging  noarch  24.2-3.fc41  updates  558.3 KiB
 python3-rpm-generators  noarch  14-11.fc41  fedora  81.7 KiB
 python3-rpm-macros  noarch  3.13-3.fc41  fedora  6.4 KiB
 tzdata  noarch  2025a-1.fc41  updates  1.6 MiB

Transaction Summary:
 Installing:  13 packages

Total size of inbound packages is 12 MiB. Need to download 12 MiB.
After this operation, 46 MiB extra will be used (install 46 MiB, remove 0 B).
[ 1/13] libb2-0:0.98.1-12.fc41.x86_64 100% | 3.1 MiB/s | 25.7 KiB | 00m00s
[ 2/13] mpdecimal-0:2.5.1-16.fc41.x86_6 100% | 43.4 MiB/s | 89.0 KiB | 00m00s
[ 3/13] python3-devel-0:3.13.2-1.fc41.x 100% | 26.3 MiB/s | 404.1 KiB | 00m00s
[ 4/13] expat-0:2.6.4-1.fc41.x86_64 100% | 55.9 MiB/s | 114.5 KiB | 00m00s
[ 5/13] python-pip-wheel-0:24.2-1.fc41. 100% | 171.7 MiB/s | 1.2 MiB | 00m00s
[ 6/13] tzdata-0:2025a-1.fc41.noarch 100% | 174.1 MiB/s | 713.3 KiB | 00m00s
[ 7/13] python3-0:3.13.2-1.fc41.x86_64 100% | 9.3 MiB/s | 28.5 KiB | 00m00s
[ 8/13] pyproject-rpm-macros-0:1.17.0-1 100% | 43.7 MiB/s | 44.7 KiB | 00m00s
[ 9/13] python-rpm-macros-0:3.13-3.fc41 100% | 8.6 MiB/s | 17.7 KiB | 00m00s
[10/13] python3-rpm-generators-0:14-11. 100% | 28.6 MiB/s | 29.3 KiB | 00m00s
[11/13] python3-rpm-macros-0:3.13-3.fc4 100% | 6.1 MiB/s | 12.4 KiB | 00m00s
[12/13] python3-libs-0:3.13.2-1.fc41.x8 100% | 246.3 MiB/s | 9.1 MiB | 00m00s
[13/13] python3-packaging-0:24.2-3.fc41 100% | 15.0 MiB/s | 153.8 KiB | 00m00s
--------------------------------------------------------------------------------
[13/13] Total 100% | 111.3 MiB/s | 11.9 MiB | 00m00s
Running transaction
[ 1/15] Verify package files 100% | 406.0 B/s | 13.0 B | 00m00s
[ 2/15] Prepare transaction 100% | 590.0 B/s | 13.0 B | 00m00s
[ 3/15] Installing python-rpm-macros-0: 100% | 0.0 B/s | 22.8 KiB | 00m00s
[ 4/15] Installing python3-rpm-macros-0 100% | 0.0 B/s | 6.7 KiB | 00m00s
[ 5/15] Installing pyproject-rpm-macros 100% | 37.7 MiB/s | 115.9 KiB | 00m00s
[ 6/15] Installing tzdata-0:2025a-1.fc4 100% | 67.3 MiB/s | 1.9 MiB | 00m00s
[ 7/15] Installing expat-0:2.6.4-1.fc41 100% | 288.1 MiB/s | 295.0 KiB | 00m00s
[ 8/15] Installing python-pip-wheel-0:2 100% | 620.8 MiB/s | 1.2 MiB | 00m00s
[ 9/15] Installing mpdecimal-0:2.5.1-16 100% | 201.2 MiB/s | 206.0 KiB | 00m00s
[10/15] Installing libb2-0:0.98.1-12.fc 100% | 10.6 MiB/s | 43.3 KiB | 00m00s
[11/15] Installing python3-libs-0:3.13. 100% | 367.4 MiB/s | 40.8 MiB | 00m00s
[12/15] Installing python3-0:3.13.2-1.f 100% | 32.7 MiB/s | 33.5 KiB | 00m00s
[13/15] Installing python3-packaging-0: 100% | 278.6 MiB/s | 570.6 KiB | 00m00s
[14/15] Installing python3-rpm-generato 100% | 81.0 MiB/s | 82.9 KiB | 00m00s
[15/15] Installing python3-devel-0:3.13 100% | 82.6 MiB/s | 1.8 MiB | 00m00s
Complete!
Finish: build setup for python-tapyoca-0.0.4-1.fc41.src.rpm
Start: rpmbuild python-tapyoca-0.0.4-1.fc41.src.rpm
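The %prep trace below does little more than unpack the source tarball with rpmuncompress and normalize permissions (a+rX,u+w,g-w,o-w). A rough, purely illustrative Python equivalent (rpmbuild's own tooling does the real work):

```python
# Illustrative only: approximate %prep's unpack + chmod steps.
import os
import stat
import tarfile

with tarfile.open("tapyoca-0.0.4.tar.gz") as tf:
    tf.extractall(".")  # creates ./tapyoca-0.0.4/

for root, dirs, files in os.walk("tapyoca-0.0.4"):
    for name in dirs + files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        # a+rX,u+w: readable by everyone, writable by owner;
        # grant execute to all only for dirs or already-executable files
        mode |= stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH | stat.S_IWUSR
        if stat.S_ISDIR(mode) or mode & stat.S_IXUSR:
            mode |= stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
        # g-w,o-w: drop group/other write
        mode &= ~(stat.S_IWGRP | stat.S_IWOTH)
        os.chmod(path, mode)
```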
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1740787200
Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.Rvlzow
+ umask 022
+ cd /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ test -d /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ /usr/bin/rm -rf /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ /usr/bin/mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ /usr/bin/mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build/SPECPARTS
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.PtPec6
+ umask 022
+ cd /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ cd /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ rm -rf tapyoca-0.0.4
+ /usr/lib/rpm/rpmuncompress -x /builddir/build/SOURCES/tapyoca-0.0.4.tar.gz
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd tapyoca-0.0.4
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.HIzYSo
+ umask 022
+ cd /builddir/build/BUILD/python-tapyoca-0.0.4-build
+ cd tapyoca-0.0.4
+ echo pyproject-rpm-macros
+ echo python3-devel
+ echo 'python3dist(packaging)'
+ echo 'python3dist(pip) >= 19'
+ '[' -f pyproject.toml ']'
+ '[' -f setup.py ']'
+ echo 'python3dist(setuptools) >= 40.8'
+ rm -rfv '*.dist-info/'
+ '[' -f /usr/bin/python3 ']'
+ mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir
+ echo -n
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ VALAFLAGS=-g
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes --cap-lints=warn'
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 '
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ CC=gcc
+ CXX=g++
+ TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir
+ RPM_TOXENV=py313
+ HOSTNAME=rpmbuild
+ /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires
Handling setuptools >= 40.8 from default build backend
Requirement not satisfied: setuptools >= 40.8
Exiting dependency generation pass: build backend
+ cat /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires
+ rm -rfv '*.dist-info/'
+ RPM_EC=0
++ jobs -p
+ exit 0
Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.buildreqs.nosrc.rpm
INFO: Going to install missing dynamic buildrequires
Updating and loading repositories:
 fedora           100% | 534.8 KiB/s |  30.5 KiB | 00m00s
 updates          100% | 160.7 KiB/s |  28.9 KiB | 00m00s
 Copr repository  100% | 140.0 KiB/s |   1.5 KiB | 00m00s
Repositories loaded.
Package "pyproject-rpm-macros-1.17.0-1.fc41.noarch" is already installed.
Package "python3-devel-3.13.2-1.fc41.x86_64" is already installed.
Package "python3-packaging-24.2-3.fc41.noarch" is already installed.

Package  Arch  Version  Repository  Size
Installing:
 python3-pip  noarch  24.2-1.fc41  fedora  11.4 MiB
 python3-setuptools  noarch  69.2.0-8.fc41  fedora  7.2 MiB

Transaction Summary:
 Installing:  2 packages

Total size of inbound packages is 4 MiB. Need to download 4 MiB.
After this operation, 19 MiB extra will be used (install 19 MiB, remove 0 B).
[1/2] python3-setuptools-0:69.2.0-8.fc4 100% | 97.8 MiB/s | 1.6 MiB | 00m00s
[2/2] python3-pip-0:24.2-1.fc41.noarch 100% | 142.9 MiB/s | 2.7 MiB | 00m00s
--------------------------------------------------------------------------------
[2/2] Total 100% | 21.9 MiB/s | 4.3 MiB | 00m00s
Running transaction
[1/4] Verify package files 100% | 181.0 B/s | 2.0 B | 00m00s
[2/4] Prepare transaction 100% | 133.0 B/s | 2.0 B | 00m00s
[3/4] Installing python3-setuptools-0:6 100% | 244.3 MiB/s | 7.3 MiB | 00m00s
[4/4] Installing python3-pip-0:24.2-1.f 100% | 220.1 MiB/s | 11.7 MiB | 00m00s
Complete!
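With python3-setuptools now installed, the rerun of %generate_buildrequires below reports the requirement satisfied. Conceptually, the per-requirement check pyproject_buildrequires.py performs looks like this (a sketch using the same packaging/importlib.metadata machinery):

```python
# Sketch of a "Requirement satisfied / not satisfied" check.
from importlib.metadata import PackageNotFoundError, version
from packaging.requirements import Requirement

def satisfied(req_str: str) -> bool:
    req = Requirement(req_str)
    try:
        installed = version(req.name)   # e.g. "69.2.0"
    except PackageNotFoundError:
        return False                    # first pass: not satisfied
    return req.specifier.contains(installed, prereleases=True)

print(satisfied("setuptools >= 40.8"))  # True in the rerun below
```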
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + VALAFLAGS=-g + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes --cap-lints=warn' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 ' + LT_SYS_LIBRARY_PATH=/usr/lib64: + CC=gcc + CXX=g++ + TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py313 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.2.0) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). 
If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') 
\n```\n\n\n\n\n\n\n\n\n\n\n# demonyms\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of the peoples of different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't -- if you're like me and go to parties for the sole purpose of seeking victims to get a one-up on -- here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives or inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicity I'll say things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow are listed 73 demonyms along with words extracted from the very first google suggestion when you type. 
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). 
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. 
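For concreteness, here is a minimal sketch of that request loop (a sketch only, assuming the `requests` package; the names are illustrative, not the actual `data_acquisition.py` code):

```python
# Minimal sketch of the suggestion-fetching step; illustrative names,
# not the actual data_acquisition.py code. Assumes `requests` is installed.
import requests

SUGGEST_URL = 'http://suggestqueries.google.com/complete/search?client=chrome&q='

def suggestions_for_demonym(demonym):
    """Google auto-suggestions for 'why are the DEMONYM so ' (trailing space included)."""
    resp = requests.get(SUGGEST_URL + f'why are the {demonym} so ')
    resp.raise_for_status()
    # With client=chrome the endpoint answers with a JSON array: [query, [suggestions], ...]
    return resp.json()[1]

# Keep only the demonyms that came back with at least one suggestion:
demonyms = ['swiss', 'french', 'german']  # the real run went through all 217
suggestions = {d: s for d in demonyms if (s := suggestions_for_demonym(d))}
```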
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch, is here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired by a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2, so no using a or i).\n(No, \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired by an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus of at least 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7, because formatannotationrelativeto itself isn't in the dictionary, so the trivial one-part partition doesn't count):\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one: the intersection of two lists, the [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, of those, I only kept the ones that had at least 2 letters, and only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nI'm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) 
\nextracting (recursively) the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThis module shows how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun it like this:\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n  {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n    mk-and-save-covid-data\n                        :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n                        skip_first_days: :param verbose: :return:\n    update-covid-data   update the coronavirus data\n    instructions-to-make-bar-chart-race\n\noptional arguments:\n  -h, --help            show this help message and exit\n```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes in the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip`) with several `.csv` files in it. We use `py2store` (to install: `pip install py2store`; project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it, as sketched below. It saves us from having to unzip the file and replace the older folder every time we download a new one. It also gives us the csvs as `pandas.DataFrame`s directly. 
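A rough, stdlib-only sketch of that "no unzipping" idea, with an assumed local zip path; the actual code below does the same thing, plus caching, through `py2store`:

```python
# Rough stdlib-only sketch of reading one csv straight out of the zip,
# without extracting anything to disk. The zip path is an assumption.
import io
import zipfile
import pandas as pd

def csv_from_zip(zip_path, csv_name):
    """Read one csv out of the downloaded zip without unzipping it to disk."""
    with zipfile.ZipFile(zip_path) as z:
        return pd.read_csv(io.BytesIO(z.read(csv_name)))

# df = csv_from_zip('novel-corona-virus-2019-dataset.zip', 'covid_19_data.csv')
```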
\n\n\n```python\nimport os\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader  # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n    import pandas as pd\n    return pd.read_csv(\n        'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n    import kaggle\n    from io import BytesIO\n    # didn't find the pure binary download function, so using temp dir to emulate\n    from tempfile import mkdtemp \n    download_dir = mkdtemp()\n    filename = 'novel-corona-virus-2019-dataset.zip'\n    zip_file = os.path.join(download_dir, filename)\n    \n    dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n    kaggle.api.dataset_download_files(dataset, download_dir)\n    with open(zip_file, 'rb') as fp:\n        b = fp.read()\n    return BytesIO(b)\n\ndef city_population_in_time():\n    import pandas as pd\n    return pd.read_csv(\n        'https://gist.githubusercontent.com/johnburnmurdoch/'\n        '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n    )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n    # delete the region col (we don't need it)\n    del df['region']\n    # rewriting a few (not all) of the country names to match those found in kaggle covid data\n    # Note: The list is not complete! Add to it as needed\n    old_and_new = [('USA', 'US'), \n                   ('Iran, Islamic Rep.', 'Iran'), \n                   ('UK', 'United Kingdom'), \n                   ('Korea, Rep.', 'Korea, South')]\n    for old, new in old_and_new:\n        df['country'] = df['country'].replace(old, new)\n\n    return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))  # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n    pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n                                 kaggle_coronavirus_dataset, \n                                 city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n    ['country_flag_image_url',\n     'kaggle_coronavirus_dataset',\n     'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n    ['COVID19_line_list_data.csv',\n     'COVID19_open_line_list.csv',\n     'covid_19_data.csv',\n     'time_series_covid_19_confirmed.csv',\n     'time_series_covid_19_confirmed_US.csv',\n     'time_series_covid_19_deaths.csv',\n     'time_series_covid_19_deaths_US.csv',\n     'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
      Province/State Country/Region      Lat     Long  1/22/20  1/23/20  1/24/20  1/25/20  1/26/20  1/27/20  ...  3/24/20  3/25/20  3/26/20  3/27/20  3/28/20  3/29/20  3/30/20  3/31/20  4/1/20  4/2/20
    0            NaN    Afghanistan  33.0000  65.0000        0        0        0        0        0        0  ...       74       84       94      110      110      120      170      174     237     273
    1            NaN        Albania  41.1533  20.1683        0        0        0        0        0        0  ...      123      146      174      186      197      212      223      243     259     277
    2            NaN        Algeria  28.0339   1.6596        0        0        0        0        0        0  ...      264      302      367      409      454      511      584      716     847     986
    3            NaN        Andorra  42.5063   1.5218        0        0        0        0        0        0  ...      164      188      224      267      308      334      370      376     390     428
    4            NaN         Angola -11.2027  17.8739        0        0        0        0        0        0  ...        3        3        4        4        5        7        7        7       8       8

    5 rows × 76 columns
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
            country  region                              flag_image_url
    0        Angola  Africa  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  Africa  https://www.countryflags.io/bi/flat/64.png
    2         Benin  Africa  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  Africa  https://www.countryflags.io/bw/flat/64.png
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n 
Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGo to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n    'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n               ('Iran, Islamic Rep.', 'Iran'), \n               ('UK', 'United Kingdom'), \n               ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n    country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
            country                              flag_image_url
    0        Angola  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  https://www.countryflags.io/bi/flat/64.png
    2         Benin  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  https://www.countryflags.io/bw/flat/64.png
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are in caching.py). \n\nSo let's say we're going to have a local folder where we'll store the various data we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get its green from the_cache\")\n```\n\n    the_cache: {}\n    Getting green...\n    the_cache: {'green': 'eggs'}\n    ... so the next time the_cached_source will get its green from the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) look like? In its raw form it would take urls as its keys, and the response of a request as its value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere are many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need a key map in the cache store.\n\nSo let's build this store, starting with the functions that get us the data we want. 
\n\n\n```python\ndef country_flag_image_url():\n    import pandas as pd\n    return pd.read_csv(\n        'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n    import os\n    import kaggle\n    from io import BytesIO\n    # didn't find the pure binary download function, so using temp dir to emulate\n    from tempfile import mkdtemp \n    download_dir = mkdtemp()\n    filename = 'novel-corona-virus-2019-dataset.zip'\n    zip_file = os.path.join(download_dir, filename)\n    \n    dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n    kaggle.api.dataset_download_files(dataset, download_dir)\n    with open(zip_file, 'rb') as fp:\n        b = fp.read()\n    return BytesIO(b)\n\ndef city_population_in_time():\n    import pandas as pd\n    return pd.read_csv(\n        'https://gist.githubusercontent.com/johnburnmurdoch/'\n        '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n    )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned values as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n    _getitem_cache_size = 999\n\n    def __init__(self, funcs):\n        # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n        self.funcs = funcs\n        self._func_of_name = {func.__name__: func for func in funcs}\n\n    def __contains__(self, k):\n        return k in self._func_of_name\n\n    def __iter__(self):\n        yield from self._func_of_name\n\n    def __len__(self):\n        return len(self._func_of_name)\n\n    @lru_cache(maxsize=_getitem_cache_size)\n    def __getitem__(self, k):\n        return self._func_of_name[k]()  # call the func\n\n    def __hash__(self):\n        # a constant hash keeps instances hashable, so lru_cache can key on (self, k)\n        return 1\n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n    ['country_flag_image_url',\n     'kaggle_coronavirus_dataset',\n     'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
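Note that the same cell appears twice above with identical output: since `__getitem__` is wrapped in `lru_cache`, the second lookup is served from memory rather than recomputed. A quick sketch to see that for yourself, assuming the `FuncReader` class defined above (the timing numbers are illustrative):

```python
# Sketch: the second lookup is served from lru_cache, not recomputed.
# Assumes the FuncReader class defined above.
import time

def slow_source():
    time.sleep(1)  # stand-in for a slow download
    return 'data'

reader = FuncReader([slow_source])

t0 = time.time(); reader['slow_source']; print(f'first:  {time.time() - t0:.2f}s')   # ~1s: function runs
t0 = time.time(); reader['slow_source']; print(f'second: {time.time() - t0:.2f}s')  # ~0s: cache hit
```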
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nimport os\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n    ['country_flag_image_url',\n     'kaggle_coronavirus_dataset',\n     'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
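If you want to check that the cache actually landed on disk, you can peek at the cache folder. A sketch, assuming the `my_local_cache` path from above; the exact file names depend on how `QuickPickleStore` lays keys out on disk:

```python
# Peek at what the local cache wrote so far. The path comes from the setup
# above; exact file names depend on QuickPickleStore's key-to-path layout.
import os

cache_dir = os.path.expanduser('~/ddir/my_sources')
print(sorted(os.listdir(cache_dir)))
```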
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.13/site-packages/setuptools/dist.py:476: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) -------------------------------------------------------------------- running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' Handling wheel from get_requires_for_build_wheel Requirement not satisfied: wheel Exiting dependency generation pass: get_requires_for_build_wheel + cat /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires + rm -rfv '*.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: fedora 100% | 1.0 MiB/s | 30.5 KiB | 00m00s updates 100% | 932.9 KiB/s | 28.9 KiB | 00m00s Copr repository 100% | 96.3 KiB/s | 1.5 KiB | 00m00s Repositories loaded. Package "pyproject-rpm-macros-1.17.0-1.fc41.noarch" is already installed. Package "python3-devel-3.13.2-1.fc41.x86_64" is already installed. Package "python3-packaging-24.2-3.fc41.noarch" is already installed. Package "python3-pip-24.2-1.fc41.noarch" is already installed. Package "python3-setuptools-69.2.0-8.fc41.noarch" is already installed. Total size of inbound packages is 166 KiB. Need to download 166 KiB. After this operation, 516 KiB extra will be used (install 516 KiB, remove 0 B). 
Package Arch Version Repository Size Installing: python3-wheel noarch 1:0.43.0-4.fc41 fedora 516.1 KiB Transaction Summary: Installing: 1 package [1/1] python3-wheel-1:0.43.0-4.fc41.noa 100% | 16.2 MiB/s | 165.8 KiB | 00m00s -------------------------------------------------------------------------------- [1/1] Total 100% | 1.6 MiB/s | 165.8 KiB | 00m00s Running transaction [1/3] Verify package files 100% | 0.0 B/s | 1.0 B | 00m00s [2/3] Prepare transaction 100% | 250.0 B/s | 1.0 B | 00m00s [3/3] Installing python3-wheel-1:0.43.0 100% | 52.3 MiB/s | 535.1 KiB | 00m00s Complete! Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.YqfYbu + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + cd tapyoca-0.0.4 + echo pyproject-rpm-macros + echo python3-devel + echo 'python3dist(packaging)' + echo 'python3dist(pip) >= 19' + '[' -f pyproject.toml ']' + '[' -f setup.py ']' + echo 'python3dist(setuptools) >= 40.8' + rm -rfv '*.dist-info/' + '[' -f /usr/bin/python3 ']' + mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + echo -n + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + VALAFLAGS=-g + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes --cap-lints=warn' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 ' + LT_SYS_LIBRARY_PATH=/usr/lib64: + CC=gcc + CXX=g++ + 
TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py313 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.2.0) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. 
\n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. 
If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. 
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus the prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, wherever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, the number of suggestions ranges between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, about each of the covered nationalities, \nhere are the top 15 demonyms people talk about, with the corresponding number of suggestions \n(a proxy for \"the number of different things people ask about the said nationality\").
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me: a set of 357 unique words and expressions to describe the 72 nationalities. \nSo there's a long tail of words used for only one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I inspected these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested the suggestions for the `why are the $demonym so ` query pattern, with `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each demonym whose results were non-empty.
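\n\nFor concreteness, here's a minimal sketch of that acquisition step. It assumes the endpoint still behaves as it did then (a JSON array whose second element is the list of suggestion strings), and `demonyms` stands for the scraped list:\n\n```python\nimport json\nfrom urllib.parse import quote\nfrom urllib.request import urlopen\n\nSUGGEST_URL = 'http://suggestqueries.google.com/complete/search?client=chrome&q='\n\ndef suggestions_for(demonym):\n    # note the trailing space after 'so ' -- it matters (see the Notes above)\n    query = f'why are the {demonym} so '\n    with urlopen(SUGGEST_URL + quote(query)) as resp:\n        payload = json.load(resp)\n    return payload[1]  # element 1 of the response holds the suggestion strings\n\n# results = {d: suggestions_for(d) for d in demonyms}  # then keep only the non-empty ones\n```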
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch, is here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired by a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2, so no using a or i).\n(No, \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired by an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus of at least 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7, because formatannotationrelativeto isn't in the dictionary):\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally, it depends on what dictionary we use. \nHere, I used a very conservative one: the intersection of two lists, the [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, of those, I only kept the ones that had at least 2 letters, and only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nI'm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).
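\n\nWith a dictionary in hand, computing a gluglu is a small recursion: try every dictionary-word prefix of length at least 2, and count the ways to partition the rest. Here's a minimal sketch (a hypothetical helper, not necessarily the package's actual code):\n\n```python\nfrom functools import lru_cache\n\ndef gluglu(word, dictionary):\n    \"\"\"Count the partitions of `word` into dictionary words of length >= 2.\"\"\"\n    @lru_cache(maxsize=None)\n    def n_partitions(s):\n        if not s:\n            return 1  # consumed the whole word: that's one valid partition\n        return sum(n_partitions(s[i:])\n                   for i in range(2, len(s) + 1)\n                   if s[:i] in dictionary)\n    return n_partitions(word)\n\ntiny_dictionary = {'new', 'news', 'spa', 'pa', 'per', 'paper', 'newspaper'}\nassert gluglu('newspaper', tiny_dictionary) == 4  # the four partitions listed above\n```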
\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!), \nextracting (recursively) the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThis module will show you how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun it like this:\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n```\n\n\n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step, in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCoronavirus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes in the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip`) with several `.csv` files in it. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already.
\n\n\n```python\nimport os # needed below for file paths\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
 Province/State  Country/Region  Lat  Long  1/22/20  1/23/20  1/24/20  1/25/20  1/26/20  1/27/20  ...  3/24/20  3/25/20  3/26/20  3/27/20  3/28/20  3/29/20  3/30/20  3/31/20  4/1/20  4/2/20\n 0  NaN  Afghanistan  33.0000  65.0000  0  0  0  0  0  0  ...  74  84  94  110  110  120  170  174  237  273\n 1  NaN  Albania  41.1533  20.1683  0  0  0  0  0  0  ...  123  146  174  186  197  212  223  243  259  277\n 2  NaN  Algeria  28.0339  1.6596  0  0  0  0  0  0  ...  264  302  367  409  454  511  584  716  847  986\n 3  NaN  Andorra  42.5063  1.5218  0  0  0  0  0  0  ...  164  188  224  267  308  334  370  376  390  428\n 4  NaN  Angola  -11.2027  17.8739  0  0  0  0  0  0  ...  3  3  4  4  5  7  7  7  8  8\n\n [5 rows x 76 columns]\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
 country  region  flag_image_url\n 0  Angola  Africa  https://www.countryflags.io/ao/flat/64.png\n 1  Burundi  Africa  https://www.countryflags.io/bi/flat/64.png\n 2  Benin  Africa  https://www.countryflags.io/bj/flat/64.png\n 3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png\n 4  Botswana  Africa  https://www.countryflags.io/bw/flat/64.png\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need; keep only the country/state column and the date columns\n p = re.compile(r'\d+/\d+/\d+') # matches date columns like 3/24/20\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/parts\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n 
Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGo to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings.\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
 country  flag_image_url\n 0  Angola  https://www.countryflags.io/ao/flat/64.png\n 1  Burundi  https://www.countryflags.io/bi/flat/64.png\n 2  Benin  https://www.countryflags.io/bj/flat/64.png\n 3  Burkina Faso  https://www.countryflags.io/bf/flat/64.png\n 4  Botswana  https://www.countryflags.io/bw/flat/64.png\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are in caching.py). \n\nSo let's say we're going to have a local folder where we'll store various data we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time, the_cached_source will get its green from the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time, the_cached_source will get its green from the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) look like? In its raw form, it would take urls as its keys, and the response of a request as its values. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as-is as a local file path. \n\nThere are many ways we could solve this. One way is to add a key map layer on the cache store, so externally it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt! A minimal sketch of that option follows.
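\n\nFor illustration only (a hypothetical wrapper, not py2store's actual key-mapping tooling): externally you speak urls, internally the keys are made filesystem-safe.\n\n```python\nfrom urllib.parse import quote\n\nclass UrlKeyedStore:\n    \"\"\"Wraps a store whose keys must be valid file names, exposing url keys instead.\"\"\"\n    def __init__(self, store):\n        self._store = store\n    def _inner_key(self, url):\n        return quote(url, safe='')  # percent-encode everything, including '/'\n    def __contains__(self, url):\n        return self._inner_key(url) in self._store\n    def __getitem__(self, url):\n        return self._store[self._inner_key(url)]\n    def __setitem__(self, url, value):\n        self._store[self._inner_key(url)] = value\n```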
\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want.\n\n\n```python\nimport os # needed below for file paths\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1 # constant hash: makes instances hashable, so lru_cache can key on (self, k)\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
 country  region  flag_image_url\n 0  Angola  Africa  https://www.countryflags.io/ao/flat/64.png\n 1  Burundi  Africa  https://www.countryflags.io/bi/flat/64.png\n 2  Benin  Africa  https://www.countryflags.io/bj/flat/64.png\n 3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png\n 4  Botswana  Africa  https://www.countryflags.io/bw/flat/64.png\n ..  ...  ...  ...\n 210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png\n 211  Tonga  Oceania  https://www.countryflags.io/to/flat/64.png\n 212  Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png\n 213  Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png\n 214  Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png\n\n [215 rows x 3 columns]\n
\n\n\n\n\n```python\n# same access, a second time: this one is served from FuncReader's lru_cache\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
 country  region  flag_image_url\n 0  Angola  Africa  https://www.countryflags.io/ao/flat/64.png\n 1  Burundi  Africa  https://www.countryflags.io/bi/flat/64.png\n 2  Benin  Africa  https://www.countryflags.io/bj/flat/64.png\n 3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png\n 4  Botswana  Africa  https://www.countryflags.io/bw/flat/64.png\n ..  ...  ...  ...\n 210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png\n 211  Tonga  Oceania  https://www.countryflags.io/to/flat/64.png\n 212  Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png\n 213  Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png\n 214  Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png\n\n [215 rows x 3 columns]\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
 name  group  year  value  subGroup  city_id  lastValue  lat  lon\n 0  Agra  India  1575  200.0  India  Agra - India  200.0  27.18333  78.01667\n 1  Agra  India  1576  212.0  India  Agra - India  200.0  27.18333  78.01667\n 2  Agra  India  1577  224.0  India  Agra - India  212.0  27.18333  78.01667\n 3  Agra  India  1578  236.0  India  Agra - India  224.0  27.18333  78.01667\n 4  Agra  India  1579  248.0  India  Agra - India  236.0  27.18333  78.01667\n ...  ...  ...  ...  ...  ...  ...  ...  ...  ...\n 6247  Vijayanagar  India  1561  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6248  Vijayanagar  India  1562  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6249  Vijayanagar  India  1563  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6250  Vijayanagar  India  1564  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6251  Vijayanagar  India  1565  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n\n [6252 rows x 9 columns]\n
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nimport os\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
 country  region  flag_image_url\n 0  Angola  Africa  https://www.countryflags.io/ao/flat/64.png\n 1  Burundi  Africa  https://www.countryflags.io/bi/flat/64.png\n 2  Benin  Africa  https://www.countryflags.io/bj/flat/64.png\n 3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png\n 4  Botswana  Africa  https://www.countryflags.io/bw/flat/64.png\n ..  ...  ...  ...\n 210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png\n 211  Tonga  Oceania  https://www.countryflags.io/to/flat/64.png\n 212  Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png\n 213  Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png\n 214  Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png\n\n [215 rows x 3 columns]\n
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
 name  group  year  value  subGroup  city_id  lastValue  lat  lon\n 0  Agra  India  1575  200.0  India  Agra - India  200.0  27.18333  78.01667\n 1  Agra  India  1576  212.0  India  Agra - India  200.0  27.18333  78.01667\n 2  Agra  India  1577  224.0  India  Agra - India  212.0  27.18333  78.01667\n 3  Agra  India  1578  236.0  India  Agra - India  224.0  27.18333  78.01667\n 4  Agra  India  1579  248.0  India  Agra - India  236.0  27.18333  78.01667\n ...  ...  ...  ...  ...  ...  ...  ...  ...  ...\n 6247  Vijayanagar  India  1561  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6248  Vijayanagar  India  1562  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6249  Vijayanagar  India  1563  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6250  Vijayanagar  India  1564  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n 6251  Vijayanagar  India  1565  480.0  India  Vijayanagar - India  480.0  15.33500  76.46200\n\n [6252 rows x 9 columns]\n
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.13/site-packages/setuptools/dist.py:476: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) -------------------------------------------------------------------- running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' Handling wheel from get_requires_for_build_wheel Requirement satisfied: wheel (installed: wheel 0.43.0) !!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). 
\n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. \n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, 
coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. 
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). 
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. 
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) 
\nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. 
\n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
       Province/State Country/Region      Lat     Long  1/22/20  1/23/20  1/24/20  1/25/20  1/26/20  1/27/20  ...  3/24/20  3/25/20  3/26/20  3/27/20  3/28/20  3/29/20  3/30/20  3/31/20  4/1/20  4/2/20
    0             NaN    Afghanistan  33.0000  65.0000        0        0        0        0        0        0  ...       74       84       94      110      110      120      170      174     237     273
    1             NaN        Albania  41.1533  20.1683        0        0        0        0        0        0  ...      123      146      174      186      197      212      223      243     259     277
    2             NaN        Algeria  28.0339   1.6596        0        0        0        0        0        0  ...      264      302      367      409      454      511      584      716     847     986
    3             NaN        Andorra  42.5063   1.5218        0        0        0        0        0        0  ...      164      188      224      267      308      334      370      376     390     428
    4             NaN         Angola -11.2027  17.8739        0        0        0        0        0        0  ...        3        3        4        4        5        7        7        7       8       8

    5 rows × 76 columns
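For contrast, here's roughly what getting that same csv looks like with the standard library alone (a sketch, assuming the kaggle zip has already been downloaded as `novel-corona-virus-2019-dataset.zip` in the working directory):

```python
import io
import zipfile

import pandas as pd

# Manual alternative to ZippedCsvs: read one csv straight out of the
# downloaded zip, without unzipping anything to disk.
with zipfile.ZipFile('novel-corona-virus-2019-dataset.zip') as z:
    confirmed = pd.read_csv(io.BytesIO(z.read('time_series_covid_19_confirmed.csv')))
```

Workable, but you'd be re-plumbing this for every source; the store approach above gives every source the same key-to-DataFrame interface.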
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
            country  region                              flag_image_url
    0        Angola  Africa  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  Africa  https://www.countryflags.io/bi/flat/64.png
    2         Benin  Africa  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  Africa  https://www.countryflags.io/bw/flat/64.png
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n    """update the coronavirus data"""\n    if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n        del data_sources._caching_store['kaggle_coronavirus_dataset']  # delete the cached item\n    _ = data_sources['kaggle_coronavirus_dataset']  # ... so this re-downloads and re-caches it\n\n# update_covid_data(data_sources)  # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n    if verbose:\n        print(*args, **kwargs)\n    \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n    """kind can be 'confirmed', 'deaths', 'confirmed_US', 'deaths_US', 'recovered'"""\n    \n    covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n    \n    df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n    if 'Province/State' in df.columns:\n        df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a'  # to avoid problems arising from NaNs\n\n    print_if_verbose(verbose, f"Before data shape: {df.shape}")\n\n    # keep only the country/state column and the date columns\n    p = re.compile(r'\d+/\d+/\d+')\n\n    assert all(isinstance(x, str) for x in df.columns)\n    date_cols = [x for x in df.columns if p.match(x)]\n    if not kind.endswith('US'):\n        df = df.loc[:, ['Country/Region'] + date_cols]\n        # group countries and sum up the contributions of their states/regions/parts\n        df['country'] = df.pop('Country/Region')\n        df = df.groupby('country').sum()\n    else:\n        df = df.loc[:, ['Province_State'] + date_cols]\n        df['state'] = df.pop('Province_State')\n        df = df.groupby('state').sum()\n\n    \n    print_if_verbose(verbose, f"After data shape: {df.shape}")\n    df = df.iloc[:, skip_first_days:]\n    \n    if not kind.endswith('US'):\n        # Joining with the country image urls and saving as an xlsx\n        country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n        t = df.copy()\n        t.columns = [str(x)[:10] for x in t.columns]\n        t = t.reset_index(drop=False)\n        t = country_image_url.merge(t, how='outer')\n        t = t.set_index('country')\n        df = t\n\n    return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n    t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n    filepath = f'country_covid_{kind}.xlsx'\n    t.to_excel(filepath)\n    print_if_verbose(verbose, f"Was saved here: {filepath}")\n\n```\n\n\n```python\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n    mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n    Before data shape: (262, 79)\n    After data shape: (183, 75)\n    Was saved here: country_covid_confirmed.xlsx\n    Before data shape: (262, 79)\n    After data shape: (183, 75)\n    Was saved here: country_covid_deaths.xlsx\n    Before data shape: (248, 79)\n    After data shape: (183, 75)\n    Was saved here: country_covid_recovered.xlsx\n    Before data shape: (3253, 86)\n    After data shape: (58, 75)\n    Was saved here: country_covid_confirmed_US.xlsx\n    Before data shape: (3253, 87)\n    After data shape: (58, 75)\n    
Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGo to https://app.flourish.studio/templates\n\nChoose "Bar chart race". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### Country flag images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n    'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n    ('Iran, Islamic Rep.', 'Iran'), \n    ('UK', 'United Kingdom'), \n    ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n    country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
            country                              flag_image_url
    0        Angola  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  https://www.countryflags.io/bi/flat/64.png
    2         Benin  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  https://www.countryflags.io/bw/flat/64.png
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A "cache-aside" read-cache. That's the word. py2store has tools for that (most of which are in caching.py). \n\nSo let's say we're going to have a local folder where we'll store various data we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f"the_cache: {the_cache}")\nprint("Getting green...")\nthe_cached_source['green']\nprint(f"the_cache: {the_cache}")\nprint("... so the next time the_cached_source will get its green from the_cache")\n```\n\n    the_cache: {}\n    Getting green...\n    the_cache: {'green': 'eggs'}\n    ... so the next time the_cached_source will get its green from the_cache\n\n\nBut now you'll notice a slight problem ahead. What exactly does our source store (or rather reader) look like? In its raw form it would take urls as its keys, and the response of a request as its value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as-is as a local file path. \n\nThere are many ways we could solve this. One way is to add a key map layer on the cache store, so that externally it speaks the url key language, but internally it maps that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. 
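Before we do, a quick aside: to make that key-map alternative concrete, here's a minimal sketch of it. Everything in it (`url_to_filename`, `UrlKeyedFiles`) is a hypothetical name for illustration, not py2store API:

```python
import os
from urllib.parse import quote, unquote

def url_to_filename(url):
    # percent-encode the url so it becomes a valid, flat file name
    return quote(url, safe='')

def filename_to_url(filename):
    return unquote(filename)

class UrlKeyedFiles:
    """A toy local-file store that speaks urls externally, file paths internally."""
    def __init__(self, rootdir):
        self.rootdir = rootdir
        os.makedirs(rootdir, exist_ok=True)

    def _path(self, url):
        return os.path.join(self.rootdir, url_to_filename(url))

    def __setitem__(self, url, data):
        with open(self._path(url), 'wb') as fp:
            fp.write(data)

    def __getitem__(self, url):
        with open(self._path(url), 'rb') as fp:
            return fp.read()

    def __contains__(self, url):
        return os.path.isfile(self._path(url))
```

With something like that as the cache, source and cache could share url keys. Aside over; here are those data-getting functions.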
\n\n\n```python\nimport os\n\ndef country_flag_image_url():\n    import pandas as pd\n    return pd.read_csv(\n        'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n    import kaggle\n    from io import BytesIO\n    # didn't find the pure binary download function, so using a temp dir to emulate\n    from tempfile import mkdtemp \n    download_dir = mkdtemp()\n    filename = 'novel-corona-virus-2019-dataset.zip'\n    zip_file = os.path.join(download_dir, filename)\n    \n    dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n    kaggle.api.dataset_download_files(dataset, download_dir)\n    with open(zip_file, 'rb') as fp:\n        b = fp.read()\n    return BytesIO(b)\n\ndef city_population_in_time():\n    import pandas as pd\n    return pd.read_csv(\n        'https://gist.githubusercontent.com/johnburnmurdoch/'\n        '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n    )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned values as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n    _getitem_cache_size = 999\n    def __init__(self, funcs):\n        # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n        self.funcs = funcs\n        self._func_of_name = {func.__name__: func for func in funcs}\n\n    def __contains__(self, k):\n        return k in self._func_of_name\n    \n    def __iter__(self):\n        yield from self._func_of_name\n    \n    def __len__(self):\n        return len(self._func_of_name)\n\n    @lru_cache(maxsize=_getitem_cache_size)\n    def __getitem__(self, k):\n        return self._func_of_name[k]()  # call the func (lru_cache memoizes the result)\n    \n    def __hash__(self):\n        # a (trivial) hash, so instances can be part of the lru_cache key above\n        return 1\n    \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n    ['country_flag_image_url',\n    'kaggle_coronavirus_dataset',\n    'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nimport os\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n    ['country_flag_image_url',\n    'kaggle_coronavirus_dataset',\n    'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] } -------------------------------------------------------------------- running dist_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/tapyoca-0.0.4.dist-info' + cat /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires + rm -rfv tapyoca-0.0.4.dist-info/ removed 'tapyoca-0.0.4.dist-info/top_level.txt' removed 'tapyoca-0.0.4.dist-info/METADATA' removed 'tapyoca-0.0.4.dist-info/LICENSE' removed directory 'tapyoca-0.0.4.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.buildreqs.nosrc.rpm INFO: Going to install missing dynamic buildrequires Updating and loading repositories: fedora 100% | 544.4 KiB/s | 30.5 KiB | 00m00s updates 100% | 535.6 KiB/s | 28.9 KiB | 00m00s Copr repository 100% | 128.3 KiB/s | 1.5 KiB | 00m00s Repositories loaded. Package "pyproject-rpm-macros-1.17.0-1.fc41.noarch" is already installed. Package "python3-devel-3.13.2-1.fc41.x86_64" is already installed. Package "python3-packaging-24.2-3.fc41.noarch" is already installed. Package "python3-pip-24.2-1.fc41.noarch" is already installed. Package "python3-setuptools-69.2.0-8.fc41.noarch" is already installed. Package "python3-wheel-1:0.43.0-4.fc41.noarch" is already installed. Nothing to do. 
Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1740787200 Executing(%generate_buildrequires): /bin/sh -e /var/tmp/rpm-tmp.5hGN9A + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + cd tapyoca-0.0.4 + echo pyproject-rpm-macros + echo python3-devel + echo 'python3dist(packaging)' + echo 'python3dist(pip) >= 19' + '[' -f pyproject.toml ']' + '[' -f setup.py ']' + echo 'python3dist(setuptools) >= 40.8' + rm -rfv '*.dist-info/' + '[' -f /usr/bin/python3 ']' + mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + echo -n + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + VALAFLAGS=-g + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes --cap-lints=warn' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 ' + LT_SYS_LIBRARY_PATH=/usr/lib64: + CC=gcc + CXX=g++ + TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + RPM_TOXENV=py313 + HOSTNAME=rpmbuild + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_buildrequires.py --generate-extras --python3_pkgversion 3 --wheeldir /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir --output /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires Handling setuptools >= 40.8 from default build backend Requirement satisfied: setuptools >= 40.8 (installed: setuptools 69.2.0) !!!! 
containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purists would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did it the first way I could think of: mapping pixels to pixels (in some fashion -- nearest neighbors is the method that yields the nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (which will be transformed to B/W -- not even gray, I mean actual B/W), and there are several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path of the file to save to (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. 
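To make that last parameter concrete, here's a minimal sketch of what a custom `coordinate_mapping_maker` could look like. It mimics the built-in 'random' option, under the assumption that the inputs are PIL images whose black pixels are the 'ink'; the function name and any interface detail beyond the description above are guesses for illustration:

```python
import numpy as np

def random_coordinate_mapping_maker(start_im, end_im):
    # black pixels (value 0) of each B/W image, as (x, y) coordinate matrices
    from_coord = np.argwhere(np.asarray(start_im.convert('1')) == 0)[:, ::-1]
    to_coord = np.argwhere(np.asarray(end_im.convert('1')) == 0)[:, ::-1]
    # align the two sets by pairing up pixels at random
    n = min(len(from_coord), len(to_coord))
    rng = np.random.default_rng()
    from_coord = from_coord[rng.choice(len(from_coord), size=n, replace=False)]
    to_coord = to_coord[rng.choice(len(to_coord), size=n, replace=False)]
    return from_coord, to_coord
```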
\n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### An image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not found, consider it as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonyms\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of the peoples of different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't, 
and if you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_"he struggled for the correct demonym for the people of Manchester"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said "why are the swiss so miserable". Now, I wouldn't say that the Swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he had a point. I apologize if you are Swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled "why are the swiss so ", and sure enough, "why are the swiss so miserable" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge "what we think" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter "why are the X so " (where X will be a noun that denotes the natives or inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, a word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicity I'll say things like "what we think of..." or "who do we most...", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow are listed 73 demonyms, along with words extracted from the very first google suggestion you get when you type
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the "so", which matters, so as to omit suggestions containing words that merely start with "so".\n* Only the tail of the suggestion is shown -- minus the prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, wherever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was "why are american dolls so expensive", which results in the "dolls expensive" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, the number of suggestions ranges between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, about each of the covered nationalities, \nhere are the top 15 demonyms people talk about, with the corresponding number of suggestions \n(a proxy for the number of different things people ask about the said nationality). 
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me: a set of 357 unique words and expressions describing the 72 nationalities. \nSo there's a long tail of words used for only one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some "complex" entries, \nsuch as mapping "Irishman or Irishwoman or Irish" to "Irish".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested the suggestions \nfor the `why are the $demonym so ` query pattern, with `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each whenever they were non-empty. 
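For concreteness, here's roughly what fetching the suggestions for one query looks like (a sketch; it assumes the `client=chrome` endpoint still returns a JSON array whose second element is the list of suggestion strings):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def google_suggestions(query):
    """Fetch google auto-suggest results for a query (sketch)."""
    url = ('http://suggestqueries.google.com/complete/search'
           '?client=chrome&q=' + quote(query))
    with urlopen(url) as resp:
        return json.load(resp)[1]  # element 1 holds the suggestion strings

# note the trailing space after 'so' (see the note above)
suggestions = google_suggestions('why are the french so ')
```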
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch, is here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote that you'll need to `pip install py2store` if you haven't already.\n\nIn the `data` folder you'll find:\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in Excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An Excel file containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n# Agglutinations\n\nInspired by a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No, "gluglu" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired by an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at least 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7, because `formatannotationrelativeto` itself isn't in the dictionary):\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally, it depends on what dictionary we use. \nHere, I used a very conservative one: the intersection of two lists, the [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, of those, I only kept the ones that had at least 2 letters, and only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nI'm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!), \nextracting (recursively) the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).
\nextracting (recursively( the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).\n\nYou can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThe module will show is how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun like this\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCorona virus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes under the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip` with several `.csv` files in them. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` already. 
\n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
Province/StateCountry/RegionLatLong1/22/201/23/201/24/201/25/201/26/201/27/20...3/24/203/25/203/26/203/27/203/28/203/29/203/30/203/31/204/1/204/2/20
0NaNAfghanistan33.000065.0000000000...748494110110120170174237273
1NaNAlbania41.153320.1683000000...123146174186197212223243259277
2NaNAlgeria28.03391.6596000000...264302367409454511584716847986
3NaNAndorra42.50631.5218000000...164188224267308334370376390428
4NaNAngola-11.202717.8739000000...3344577788
\n

5 rows \u00d7 76 columns

\n
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryregionflag_image_url
0AngolaAfricahttps://www.countryflags.io/ao/flat/64.png
1BurundiAfricahttps://www.countryflags.io/bi/flat/64.png
2BeninAfricahttps://www.countryflags.io/bj/flat/64.png
3Burkina FasoAfricahttps://www.countryflags.io/bf/flat/64.png
4BotswanaAfricahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'confirmed_US', 'confirmed_US', 'recovered'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile('\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/pargs\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n 
Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGot to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countryflag_image_url
0Angolahttps://www.countryflags.io/ao/flat/64.png
1Burundihttps://www.countryflags.io/bi/flat/64.png
2Beninhttps://www.countryflags.io/bj/flat/64.png
3Burkina Fasohttps://www.countryflags.io/bf/flat/64.png
4Botswanahttps://www.countryflags.io/bw/flat/64.png
\n
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are are caching.py). \n\nSo let's say we're going to have a local folder where we'll store various datas we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get it's green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get it's green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) looks like? In it's raw form it would take urls as it's keys, and the response of a request as it's value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere's many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this `MyDataStore` store. We'll start by defining the functions that get us the data we want. 
\n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
|     | country         | region  | flag_image_url                             |
|-----|-----------------|---------|--------------------------------------------|
| 0   | Angola          | Africa  | https://www.countryflags.io/ao/flat/64.png |
| 1   | Burundi         | Africa  | https://www.countryflags.io/bi/flat/64.png |
| 2   | Benin           | Africa  | https://www.countryflags.io/bj/flat/64.png |
| 3   | Burkina Faso    | Africa  | https://www.countryflags.io/bf/flat/64.png |
| 4   | Botswana        | Africa  | https://www.countryflags.io/bw/flat/64.png |
| ... | ...             | ...     | ...                                        |
| 210 | Solomon Islands | Oceania | https://www.countryflags.io/sb/flat/64.png |
| 211 | Tonga           | Oceania | https://www.countryflags.io/to/flat/64.png |
| 212 | Tuvalu          | Oceania | https://www.countryflags.io/tv/flat/64.png |
| 213 | Vanuatu         | Oceania | https://www.countryflags.io/vu/flat/64.png |
| 214 | Samoa           | Oceania | https://www.countryflags.io/ws/flat/64.png |

215 rows × 3 columns
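Note that asking for the same key again (next cell) doesn't re-download anything: `FuncReader.__getitem__` is wrapped in `lru_cache`, which is also why the class defines a constant `__hash__`. A quick, illustrative way to see this (timing code of my own, not from the original notebook; actual durations depend on your connection):

```python
import time

t0 = time.time()
_ = data_sources['country_flag_image_url']  # first access: calls the function (network fetch)
t1 = time.time()
_ = data_sources['country_flag_image_url']  # second access: served from the lru_cache
t2 = time.time()
print(f"first: {t1 - t0:.3f}s, second: {t2 - t1:.6f}s")
```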
```python
data_sources['country_flag_image_url']
```
|     | country         | region  | flag_image_url                             |
|-----|-----------------|---------|--------------------------------------------|
| 0   | Angola          | Africa  | https://www.countryflags.io/ao/flat/64.png |
| 1   | Burundi         | Africa  | https://www.countryflags.io/bi/flat/64.png |
| 2   | Benin           | Africa  | https://www.countryflags.io/bj/flat/64.png |
| 3   | Burkina Faso    | Africa  | https://www.countryflags.io/bf/flat/64.png |
| 4   | Botswana        | Africa  | https://www.countryflags.io/bw/flat/64.png |
| ... | ...             | ...     | ...                                        |
| 210 | Solomon Islands | Oceania | https://www.countryflags.io/sb/flat/64.png |
| 211 | Tonga           | Oceania | https://www.countryflags.io/to/flat/64.png |
| 212 | Tuvalu          | Oceania | https://www.countryflags.io/tv/flat/64.png |
| 213 | Vanuatu         | Oceania | https://www.countryflags.io/vu/flat/64.png |
| 214 | Samoa           | Oceania | https://www.countryflags.io/ws/flat/64.png |

215 rows × 3 columns
```python
data_sources['city_population_in_time']
```
|      | name        | group | year | value | subGroup | city_id             | lastValue | lat      | lon      |
|------|-------------|-------|------|-------|----------|---------------------|-----------|----------|----------|
| 0    | Agra        | India | 1575 | 200.0 | India    | Agra - India        | 200.0     | 27.18333 | 78.01667 |
| 1    | Agra        | India | 1576 | 212.0 | India    | Agra - India        | 200.0     | 27.18333 | 78.01667 |
| 2    | Agra        | India | 1577 | 224.0 | India    | Agra - India        | 212.0     | 27.18333 | 78.01667 |
| 3    | Agra        | India | 1578 | 236.0 | India    | Agra - India        | 224.0     | 27.18333 | 78.01667 |
| 4    | Agra        | India | 1579 | 248.0 | India    | Agra - India        | 236.0     | 27.18333 | 78.01667 |
| ...  | ...         | ...   | ...  | ...   | ...      | ...                 | ...       | ...      | ...      |
| 6247 | Vijayanagar | India | 1561 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6248 | Vijayanagar | India | 1562 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6249 | Vijayanagar | India | 1563 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6250 | Vijayanagar | India | 1564 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6251 | Vijayanagar | India | 1565 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |

6252 rows × 9 columns
But we wanted all of this to be cached locally, right? So, a few more lines to do that:

```python
import os
from py2store.caching import mk_cached_store
from py2store import QuickPickleStore

my_local_cache = os.path.expanduser('~/ddir/my_sources')

CachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))
```

```python
data_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])
list(data_sources)
```

    ['country_flag_image_url',
     'kaggle_coronavirus_dataset',
     'city_population_in_time']

```python
data_sources['country_flag_image_url']
```
|     | country         | region  | flag_image_url                             |
|-----|-----------------|---------|--------------------------------------------|
| 0   | Angola          | Africa  | https://www.countryflags.io/ao/flat/64.png |
| 1   | Burundi         | Africa  | https://www.countryflags.io/bi/flat/64.png |
| 2   | Benin           | Africa  | https://www.countryflags.io/bj/flat/64.png |
| 3   | Burkina Faso    | Africa  | https://www.countryflags.io/bf/flat/64.png |
| 4   | Botswana        | Africa  | https://www.countryflags.io/bw/flat/64.png |
| ... | ...             | ...     | ...                                        |
| 210 | Solomon Islands | Oceania | https://www.countryflags.io/sb/flat/64.png |
| 211 | Tonga           | Oceania | https://www.countryflags.io/to/flat/64.png |
| 212 | Tuvalu          | Oceania | https://www.countryflags.io/tv/flat/64.png |
| 213 | Vanuatu         | Oceania | https://www.countryflags.io/vu/flat/64.png |
| 214 | Samoa           | Oceania | https://www.countryflags.io/ws/flat/64.png |

215 rows × 3 columns
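At this point the fetched items also live on disk under `my_local_cache`. A quick way to peek (just a sketch: the exact file names and extensions are `QuickPickleStore`'s business, so what you see may differ):

```python
import os
# one cached file per key fetched so far
print(sorted(os.listdir(my_local_cache)))
```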
```python
data_sources['city_population_in_time']
```
|      | name        | group | year | value | subGroup | city_id             | lastValue | lat      | lon      |
|------|-------------|-------|------|-------|----------|---------------------|-----------|----------|----------|
| 0    | Agra        | India | 1575 | 200.0 | India    | Agra - India        | 200.0     | 27.18333 | 78.01667 |
| 1    | Agra        | India | 1576 | 212.0 | India    | Agra - India        | 200.0     | 27.18333 | 78.01667 |
| 2    | Agra        | India | 1577 | 224.0 | India    | Agra - India        | 212.0     | 27.18333 | 78.01667 |
| 3    | Agra        | India | 1578 | 236.0 | India    | Agra - India        | 224.0     | 27.18333 | 78.01667 |
| 4    | Agra        | India | 1579 | 248.0 | India    | Agra - India        | 236.0     | 27.18333 | 78.01667 |
| ...  | ...         | ...   | ...  | ...   | ...      | ...                 | ...       | ...      | ...      |
| 6247 | Vijayanagar | India | 1561 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6248 | Vijayanagar | India | 1562 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6249 | Vijayanagar | India | 1563 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6250 | Vijayanagar | India | 1564 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |
| 6251 | Vijayanagar | India | 1565 | 480.0 | India    | Vijayanagar - India | 480.0     | 15.33500 | 76.46200 |

6252 rows × 9 columns
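As an aside, recall the alternative we dismissed earlier: key-mapping the cache store so that it speaks urls externally but stores under file-safe keys internally. A minimal sketch of that flavor, for contrast (illustrative only -- py2store ships its own key-wrapping tools for this):

```python
import hashlib

class KeyMappedCache:
    """A dict-like cache whose external keys are urls, stored internally under file-safe names."""

    def __init__(self, backend):
        self._backend = backend  # any mutable mapping (a dict, a folder-of-files store, ...)

    @staticmethod
    def _safe_key(url):
        # hash the url into something that's always a valid file name
        return hashlib.md5(url.encode('utf-8')).hexdigest()

    def __contains__(self, url):
        return self._safe_key(url) in self._backend

    def __getitem__(self, url):
        return self._backend[self._safe_key(url)]

    def __setitem__(self, url, value):
        self._backend[self._safe_key(url)] = value
```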
```python
z = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])
list(z)
```

", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }
/usr/lib/python3.13/site-packages/setuptools/dist.py:476: SetuptoolsDeprecationWarning: Invalid dash-separated options !! Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. !! opt = self.warn_dash_deprecation(opt, section)
/usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg)
/usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg)
/usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg)
--------------------------------------------------------------------
running egg_info
writing tapyoca.egg-info/PKG-INFO
writing dependency_links to tapyoca.egg-info/dependency_links.txt
writing top-level names to tapyoca.egg-info/top_level.txt
reading manifest file 'tapyoca.egg-info/SOURCES.txt'
adding license file 'LICENSE'
writing manifest file 'tapyoca.egg-info/SOURCES.txt'
Handling wheel from get_requires_for_build_wheel
Requirement satisfied: wheel (installed: wheel 0.43.0)
!!!! containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca
Setup params -------------------------------------------------------
{ "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "

# tapyoca
A medley of small projects


# parquet_deformations

I'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purists would lynch me.

Really, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did it the first way I could think of: mapping pixels to pixels (in some fashion -- nearest neighbors being the method that yields the nicest results, under the pixel-level restriction).
Of course, this can be applied to any image (it will be converted to B/W -- not even gray, I mean actual B/W), and there are several ways you can render the parquet (I like the gif rendering).

The main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can also specify:
- `n_steps`: Number of steps from start to end image
- `save_to_file`: path of the file to save to (if not given, will just return the image object)
- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'
- `coordinate_mapping_maker`: A function that will return the mapping between start and end. This function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose two columns are the `(x, y)` coordinates, and whose rows represent aligned positions that should be mapped (see the sketch below).
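To make that last contract concrete, here's a sketch of a homemade coordinate mapping maker (the exact signature `mk_deformation_image` expects is an assumption on my part; the built-in 'scan' and 'random' makers used in the examples below are what actually ships):

```python
import numpy as np

def random_coordinate_mapping(start_pixels, end_pixels):
    """Map each 'on' pixel of the start image to a random 'on' pixel of the end image.

    start_pixels, end_pixels: 2d boolean arrays (True where the image is black).
    Returns (from_coord, to_coord): two aligned (n, 2) coordinate arrays
    (row/col order here; swap the columns if (x, y) order is wanted).
    """
    from_coord = np.argwhere(start_pixels)
    end_pool = np.argwhere(end_pixels)
    # align the two sides by sampling end pixels (with replacement) for every start pixel
    idx = np.random.randint(0, len(end_pool), size=len(from_coord))
    return from_coord, end_pool[idx]
```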
## Examples

### Two words...

```python
fit_to_size = 400
start_im = image_of_text('sensor').rotate(90, expand=1)
end_im = image_of_text('meaning').rotate(90, expand=1)
start_and_end_image(start_im, end_im)
```

![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)

```python
im = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500, 200))
im
```

![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)

```python
im = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200, 200))
im
```

![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)

```python
f = 'sensor_meaning_knn.gif'
mk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)
display_gif(f)
```

```python
f = 'sensor_meaning_scan.gif'
mk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f,
                     coordinate_mapping_maker='scan')
display_gif(f)
```

```python
f = 'sensor_meaning_random.gif'
mk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f,
                     coordinate_mapping_maker='random')
display_gif(f)
```

### From a list of words

```python
start_words = ['sensor', 'vibration', 'tempature']
end_words = ['sense', 'meaning', 'detection']
start_im, end_im = make_start_and_end_images_with_words(
    start_words, end_words, perm=True, repeat=2, size=150)
start_and_end_image(start_im, end_im).resize((600, 200))
```

![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)

```python
im = mk_deformation_image(start_im, end_im, 5)
im
```

![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)

```python
f = 'bunch_of_words.gif'
mk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)
display_gif(f)
```

### From files

```python
start_im = Image.open('sensor_strip_01.png')
end_im = Image.open('sense_strip_01.png')
start_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))
```

![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)

```python
im = mk_deformation_image(start_im, end_im, 7)
im
```

![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)

```python
f = 'medley.gif'
mk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)
display_gif(f)
```

```python
mk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f,
                     coordinate_mapping_maker='scan')
display_gif(f)
```

### An image and some text

```python
start_im = 'img/waveform_01.png'  # will first look for a file, and if not found, consider it as text
end_im = 'makes sense'

mk_gif_of_deformations(start_im, end_im, n_steps=20,
                       save_to_file='image_and_text.gif')
display_gif('image_and_text.gif')
```


# demonyms

## What do we think about other peoples?

This project is meant to get an idea of what people think of people from different nations, as seen by what they ask google about them.

Here I use python code to acquire, clean up, and analyze the data.

### Demonym

If you're like me, you enjoy the false and fleeting impression of superiority that comes from knowing a word someone else doesn't, and you go to parties for the sole purpose of seeking victims to get a one-up on. If so, here's a cool word to add to your arsenal:

**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.
_"he struggled for the correct demonym for the people of Manchester"_

### Back-story of this analysis

During a discussion (about traveling in Europe) someone said "why are the swiss so miserable". Now, I wouldn't say that the swiss are especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting them with Italians, so perhaps he had a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect.

We googled "why are the swiss so ", and sure enough, "why are the swiss so miserable" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.

That's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge "what we think" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter "why are the X so " (where X is a noun that denotes the natives or inhabitants of a particular country; a *demonym* if you will).

### Warning

Again, a word of warning: all data and analyses are biased.
Take everything you'll read here (and, to be fair, what you read anywhere) with a grain of salt.
For simplicity I'll say things like "what we think of..." or "who do we most...", etc.
But I don't **really** mean that.

### Resources

* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.
* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`


## The results

### In a nutshell

Below are listed 73 demonyms, along with words extracted from the very first google suggestion when you type
`why are the DEMONYM so `

```text
afghan         eyes beautiful
albanian       beautiful
american       girl dolls expensive
australian     tall
belgian        fries good
bhutanese      happy
brazilian      good at football
british        full of grief and despair
bulgarian      properties cheap
burmese        cats affectionate
cambodian      cows skinny
canadian       nice
chinese        healthy
colombian      avocados big
cuban          cigars good
czech          tall
dominican      republic and haiti different
egyptian       gods important
english        reserved
eritrean       beautiful
ethiopian      beautiful
filipino       proud
finn           shoes expensive
french         healthy
german         tall
greek          gods messed up
haitian        parents strict
hungarian      words long
indian         tv debates chaotic
indonesian     smart
iranian        beautiful
israeli        startups successful
italian        short
jamaican       sprinters fast
japanese       polite
kenyan         runners good
lebanese       rich
malagasy       names long
malaysian      drivers bad
maltese        rude
mongolian      horses small
moroccan       rugs expensive
nepalese       beautiful
nigerian       tall
north korean   hats big
norwegian      flights cheap
pakistani      fair
peruvian       blueberries big
pole           vaulters hot
portuguese     short
puerto rican   and cuban flags similar
romanian       beautiful
russian        good at math
samoan         big
saudi          arrogant
scottish       bitter
senegalese     tall
serbian        tall
singaporean    rude
somali         parents strict
south african  plugs big
south korean   tall
sri lankan     dark
sudanese       tall
swiss          good at making watches
syrian         families large
taiwanese      pretty
thai           pretty
tongan         big
ukrainian      beautiful
vietnamese     fiercely nationalistic
welsh          dark
zambian        emeralds cheap
```

Notes:
* The queries actually have a space after the "so", which matters, so as to omit suggestions containing words that merely start with "so".
* Only the tail of the suggestion is shown -- minus the prefix (`why are the DEMONYM` or `why are DEMONYM`), as well as the `so`, wherever it lands in the suggestion.
For example, the first suggestion for the american demonym was "why are american dolls so expensive", which results in the "dolls expensive" association.

### Who do we most talk/ask about?

The original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is).
Only 73 demonyms gave me at least one suggestion.
But within those, the number of suggestions ranges between 1 and 20 (which is probably the default maximum number of suggestions for the API I used).
So, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities,
here are the top 15 demonyms people talk about, with the corresponding number of suggestions
(a proxy for "the number of different things people ask about the said nationality").
```text
french       20
singaporean  20
german       20
british      20
swiss        20
english      19
italian      18
cuban        18
canadian     18
welsh        18
australian   17
maltese      16
american     16
japanese     14
scottish     14
```

### Who do we least talk/ask about?

Conversely, here are the 19 demonyms that came back with only one suggestion.

```text
somali        1
bhutanese     1
syrian        1
tongan        1
cambodian     1
malagasy      1
saudi         1
serbian       1
czech         1
eritrean      1
finn          1
puerto rican  1
pole          1
haitian       1
hungarian     1
peruvian      1
moroccan      1
mongolian     1
zambian       1
```

### What do we think about people?

Why are the French so...

How would you (if you're (un)lucky enough to know the French) finish this sentence?
You might even have several opinions about the French, and any other group of people you've rubbed shoulders with.
What words would your palette contain to describe different nationalities?
What words would others (at least those that ask questions to google) use?

Well, here's what my auto-suggest search gave me: a set of 357 unique words and expressions to describe the 72 nationalities.
So there's a long tail of words used for only one nationality. But some words occur for more than one nationality.
Here are the top 12 words/expressions used to describe people of the world.

```text
beautiful        11
tall             11
short             9
names long        8
proud             8
parents strict    8
smart             8
nice              7
boring            6
rich              5
dark              5
successful        5
```

### Who is beautiful? Who is tall? Who is short? Who is smart?

```text
beautiful      : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese
tall           : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese
short          : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh
names long     : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh
proud          : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh
parents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan
smart          : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese
nice           : canadian, english, filipino, nepalese, portuguese, taiwanese, thai
boring         : british, english, french, german, singaporean, swiss
rich           : lebanese, pakistani, singaporean, taiwanese, vietnamese
dark           : filipino, senegalese, sri lankan, vietnamese, welsh
successful     : chinese, english, japanese, lebanese, swiss
```

## How did I do it?

I scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.

Then I diagnosed these and manually made a mapping to simplify some "complex" entries,
such as mapping an entry like "Irishman or Irishwoman or Irish" to "Irish".

Using the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested the suggestions
for the `why are the $demonym so ` query pattern, with `$demonym` running through all 217 demonyms from the list above,
storing the results whenever they were non-empty.
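In code, the request step is essentially this (a minimal sketch of the suggest call -- not the package's actual `data_acquisition.py`, whose details may differ):

```python
import json
import urllib.parse
import urllib.request

SUGGEST_PREFIX = 'http://suggestqueries.google.com/complete/search?client=chrome&q='

def suggestions_of_query(query):
    """Return google's auto-suggest list for `query` (the trailing space in the query matters!)."""
    url = SUGGEST_PREFIX + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        payload = json.loads(resp.read().decode('utf-8'))
    return payload[1]  # payload[0] echoes the query; payload[1] is the list of suggestions

# suggestions_of_query('why are the french so ')
```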
Then, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.

## Resources you can find here

The code to do this analysis yourself, from scratch: `data_acquisition.py`.

The jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`

Note that you'll need to `pip install py2store` if you haven't already.

In the `data` folder you'll find:
* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms
* country_demonym.xlsx: The same as above, but in excel form
* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics.
* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics


# Agglutinations

Inspired by a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:

_Resist the urge to elide the underscore in multiword function or method names_

So I wondered...

## Gluglus

The gluglu of a word is the number of partitions you can make of that word into words (of length at least 2, so no using a or i).
(No, "gluglu" isn't an actual term -- unless everyone starts using it from now on.
But it was inspired by an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)

For example, the gluglu of ``newspaper`` is 4:

```
newspaper
    new spa per
    news pa per
    news paper
```

Every (valid) word has a gluglu of at least 1.


## How many standard library names have gluglus of at least 2?

108

Here's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.

The winner has a gluglu of 6 (not 7, because formatannotationrelativeto isn't in the dictionary):

```
formatannotationrelativeto
	for mat an not at ion relative to
	for mat annotation relative to
	form at an not at ion relative to
	form at annotation relative to
	format an not at ion relative to
	format annotation relative to
```

## Details

### Dictionary

Really, it depends on what dictionary we use.
Here, I used a very conservative one: the intersection of two lists, the [corncob](http://www.mieliestronk.com/corncob_lowercase.txt)
and the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.
Of those, I additionally kept only the words that had at least 2 letters, and only letters (no hyphens or disturbing diacritics).

Diacritics. Look it up. Impress your next nerd date.

I'm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).
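Before moving on to the standard-lib part, here's a minimal sketch of the gluglu computation itself, assuming a `words` dictionary set like the one above (an illustration, not the package's actual code):

```python
from functools import lru_cache

def gluglu(word, words):
    """Number of ways to partition `word` into dictionary words of length >= 2."""
    words = frozenset(words)

    @lru_cache(maxsize=None)
    def count(s):
        if not s:
            return 1  # the empty suffix: one (trivial) way to finish a partition
        # try every dictionary-word prefix of length >= 2, recurse on the rest
        return sum(count(s[i:]) for i in range(2, len(s) + 1) if s[:i] in words)

    return count(word)

# gluglu('newspaper', {'new', 'news', 'spa', 'pa', 'per', 'paper', 'newspaper'})  # -> 4
```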
### Standard Lib Names

Surprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing.

What I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!),
extracting (recursively) the names of any (non-underscored) attributes if they were modules or callables,
as well as extracting the arguments of these callables (when they had signatures).

You can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py)
and the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).


# covid

## Bar Chart Races (applied to covid-19 spread)

This module shows how to make these:
- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/
- Deaths (by country): https://public.flourish.studio/visualisation/1705644/
- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/
- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/

### The script

If you just want to run this as a script to get the job done, you have one here:
https://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py

Run it like this:
```
$ python covid_bar_chart_race.py -h
usage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...

positional arguments:
  {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}
    mk-and-save-covid-data
                        :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param
                        skip_first_days: :param verbose: :return:
    update-covid-data   update the coronavirus data
    instructions-to-make-bar-chart-race

optional arguments:
  -h, --help            show this help message and exit
```

### The jupyter notebook

The notebook (the .ipynb file) shows you how to do it step by step, in case you want to reuse the methods for other stuff.


## Getting and preparing the data

Coronavirus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.

Population data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv

The covid data comes in the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip`) with several `.csv` files in it. We use `py2store` (to install: `pip install py2store`; project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It lets us avoid unzipping the file and replacing the older folder every time we download a new one. It also gives us the csvs as `pandas.DataFrame` objects directly.
```python
import os
import pandas as pd
from io import BytesIO
from py2store import kv_wrap, ZipReader  # google it and pip install it
from py2store.caching import mk_cached_store
from py2store import QuickPickleStore
from py2store.sources import FuncReader

def country_flag_image_url():
    import pandas as pd
    return pd.read_csv(
        'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')

def kaggle_coronavirus_dataset():
    import kaggle
    from io import BytesIO
    # didn't find the pure binary download function, so using a temp dir to emulate one
    from tempfile import mkdtemp
    download_dir = mkdtemp()
    filename = 'novel-corona-virus-2019-dataset.zip'
    zip_file = os.path.join(download_dir, filename)

    dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'
    kaggle.api.dataset_download_files(dataset, download_dir)
    with open(zip_file, 'rb') as fp:
        b = fp.read()
    return BytesIO(b)

def city_population_in_time():
    import pandas as pd
    return pd.read_csv(
        'https://gist.githubusercontent.com/johnburnmurdoch/'
        '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'
    )

def country_flag_image_url_prep(df: pd.DataFrame):
    # delete the region col (we don't need it)
    del df['region']
    # rewrite a few (not all) of the country names to match those found in the kaggle covid data
    # Note: The list is not complete! Add to it as needed
    old_and_new = [('USA', 'US'),
                   ('Iran, Islamic Rep.', 'Iran'),
                   ('UK', 'United Kingdom'),
                   ('Korea, Rep.', 'Korea, South')]
    for old, new in old_and_new:
        df['country'] = df['country'].replace(old, new)

    return df


@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))  # this is to format the data as a dataframe
class ZippedCsvs(ZipReader):
    pass
# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)
```

```python
# Enter here the place you want to cache your data
my_local_cache = os.path.expanduser('~/ddir/my_sources')
```

```python
CachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))
```

```python
data_sources = CachedFuncReader([country_flag_image_url,
                                 kaggle_coronavirus_dataset,
                                 city_population_in_time])
list(data_sources)
```

    ['country_flag_image_url',
     'kaggle_coronavirus_dataset',
     'city_population_in_time']

```python
covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])
list(covid_datasets)
```

    ['COVID19_line_list_data.csv',
     'COVID19_open_line_list.csv',
     'covid_19_data.csv',
     'time_series_covid_19_confirmed.csv',
     'time_series_covid_19_confirmed_US.csv',
     'time_series_covid_19_deaths.csv',
     'time_series_covid_19_deaths_US.csv',
     'time_series_covid_19_recovered.csv']

```python
covid_datasets['time_series_covid_19_confirmed.csv'].head()
```
|   | Province/State | Country/Region | Lat      | Long    | 1/22/20 | 1/23/20 | 1/24/20 | 1/25/20 | 1/26/20 | 1/27/20 | ... | 3/24/20 | 3/25/20 | 3/26/20 | 3/27/20 | 3/28/20 | 3/29/20 | 3/30/20 | 3/31/20 | 4/1/20 | 4/2/20 |
|---|----------------|----------------|----------|---------|---------|---------|---------|---------|---------|---------|-----|---------|---------|---------|---------|---------|---------|---------|---------|--------|--------|
| 0 | NaN            | Afghanistan    | 33.0000  | 65.0000 | 0       | 0       | 0       | 0       | 0       | 0       | ... | 74      | 84      | 94      | 110     | 110     | 120     | 170     | 174     | 237    | 273    |
| 1 | NaN            | Albania        | 41.1533  | 20.1683 | 0       | 0       | 0       | 0       | 0       | 0       | ... | 123     | 146     | 174     | 186     | 197     | 212     | 223     | 243     | 259    | 277    |
| 2 | NaN            | Algeria        | 28.0339  | 1.6596  | 0       | 0       | 0       | 0       | 0       | 0       | ... | 264     | 302     | 367     | 409     | 454     | 511     | 584     | 716     | 847    | 986    |
| 3 | NaN            | Andorra        | 42.5063  | 1.5218  | 0       | 0       | 0       | 0       | 0       | 0       | ... | 164     | 188     | 224     | 267     | 308     | 334     | 370     | 376     | 390    | 428    |
| 4 | NaN            | Angola         | -11.2027 | 17.8739 | 0       | 0       | 0       | 0       | 0       | 0       | ... | 3       | 3       | 4       | 4       | 5       | 7       | 7       | 7       | 8      | 8      |

5 rows × 76 columns
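If you don't have py2store handy, the `ZippedCsvs` idea above boils down to something like the following stand-in (a sketch, not the library's implementation):

```python
import io
import zipfile

import pandas as pd

class ZippedCsvsStandin:
    """Dict-like read access to an in-memory zip of csv files."""

    def __init__(self, zip_bytes):
        # zip_bytes: a BytesIO (or any file-like object) holding the zip archive
        self._zip = zipfile.ZipFile(zip_bytes)

    def __iter__(self):
        yield from self._zip.namelist()

    def __getitem__(self, name):
        # read one member of the archive and parse it as a csv
        return pd.read_csv(io.BytesIO(self._zip.read(name)))
```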
```python
country_flag_image_url = data_sources['country_flag_image_url']
country_flag_image_url.head()
```
|   | country      | region | flag_image_url                             |
|---|--------------|--------|--------------------------------------------|
| 0 | Angola       | Africa | https://www.countryflags.io/ao/flat/64.png |
| 1 | Burundi      | Africa | https://www.countryflags.io/bi/flat/64.png |
| 2 | Benin        | Africa | https://www.countryflags.io/bj/flat/64.png |
| 3 | Burkina Faso | Africa | https://www.countryflags.io/bf/flat/64.png |
| 4 | Botswana     | Africa | https://www.countryflags.io/bw/flat/64.png |
```python
from IPython.display import Image
flag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']
Image(url=flag_image_url_of_country['Australia'])
```

### Update coronavirus data

```python
# To update the coronavirus data:
def update_covid_data(data_sources):
    """update the coronavirus data"""
    if 'kaggle_coronavirus_dataset' in data_sources._caching_store:
        del data_sources._caching_store['kaggle_coronavirus_dataset']  # delete the cached item
    _ = data_sources['kaggle_coronavirus_dataset']  # ... and fetch it afresh

# update_covid_data(data_sources)  # uncomment here when you want to update
```

### Prepare data for flourish upload

```python
import re

def print_if_verbose(verbose, *args, **kwargs):
    if verbose:
        print(*args, **kwargs)

def country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):
    """kind can be 'confirmed', 'deaths', 'confirmed_US', 'deaths_US', 'recovered'"""

    covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])

    df = covid_datasets[f'time_series_covid_19_{kind}.csv']
    if 'Province/State' in df.columns:
        df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a'  # to avoid problems arising from NaNs

    print_if_verbose(verbose, f"Before data shape: {df.shape}")

    # drop some columns we don't need
    p = re.compile(r'\d+/\d+/\d+')

    assert all(isinstance(x, str) for x in df.columns)
    date_cols = [x for x in df.columns if p.match(x)]
    if not kind.endswith('US'):
        df = df.loc[:, ['Country/Region'] + date_cols]
        # group countries and sum up the contributions of their states/regions/parts
        df['country'] = df.pop('Country/Region')
        df = df.groupby('country').sum()
    else:
        df = df.loc[:, ['Province_State'] + date_cols]
        df['state'] = df.pop('Province_State')
        df = df.groupby('state').sum()

    print_if_verbose(verbose, f"After data shape: {df.shape}")
    df = df.iloc[:, skip_first_days:]

    if not kind.endswith('US'):
        # join with the country image urls, to be saved as an xls
        country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])
        t = df.copy()
        t.columns = [str(x)[:10] for x in t.columns]
        t = t.reset_index(drop=False)
        t = country_image_url.merge(t, how='outer')
        t = t.set_index('country')
        df = t

    return df


def mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):
    t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)
    filepath = f'country_covid_{kind}.xlsx'
    t.to_excel(filepath)
    print_if_verbose(verbose, f"Was saved here: {filepath}")
```

```python
for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:
    mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)
```

    Before data shape: (262, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_confirmed.xlsx
    Before data shape: (262, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_deaths.xlsx
    Before data shape: (248, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_recovered.xlsx
    Before data shape: (3253, 86)
    After data shape: (58, 75)
    Was saved here: country_covid_confirmed_US.xlsx
    Before data shape: (3253, 87)
    After data shape: (58, 75)
    Was saved here: country_covid_deaths_US.xlsx

### Upload to Flourish, tune, and publish

Go to https://public.flourish.studio/, get a free account, and play.

Go to https://app.flourish.studio/templates

Choose "Bar chart race". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/

... and then play with the settings.


## Discussion of the methods

```python
from py2store import *
from IPython.display import Image
```

### country flags images

The manual data prep looks something like this.

```python
import pandas as pd

# get the csv data from the url
country_image_url_source = \
    'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'
country_image_url = pd.read_csv(country_image_url_source)

# delete the region col (we don't need it)
del country_image_url['region']

# rewrite a few (not all) of the country names to match those found in the kaggle covid data
# Note: The list is not complete! Add to it as needed
# TODO: (Wishful) Use a general smart soft-matching algorithm to do this automatically.
# TODO: This could use edit-distance, synonyms, acronym generation, etc.
old_and_new = [('USA', 'US'),
               ('Iran, Islamic Rep.', 'Iran'),
               ('UK', 'United Kingdom'),
               ('Korea, Rep.', 'Korea, South')]
for old, new in old_and_new:
    country_image_url['country'] = country_image_url['country'].replace(old, new)

image_url_of_country = country_image_url.set_index('country')['flag_image_url']

country_image_url.head()
```
|   | country      | flag_image_url                             |
|---|--------------|--------------------------------------------|
| 0 | Angola       | https://www.countryflags.io/ao/flat/64.png |
| 1 | Burundi      | https://www.countryflags.io/bi/flat/64.png |
| 2 | Benin        | https://www.countryflags.io/bj/flat/64.png |
| 3 | Burkina Faso | https://www.countryflags.io/bf/flat/64.png |
| 4 | Botswana     | https://www.countryflags.io/bw/flat/64.png |
```python
Image(url=image_url_of_country['Australia'])
```
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] } -------------------------------------------------------------------- running dist_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/tapyoca-0.0.4.dist-info' + cat /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-buildrequires + rm -rfv tapyoca-0.0.4.dist-info/ removed 'tapyoca-0.0.4.dist-info/top_level.txt' removed 'tapyoca-0.0.4.dist-info/METADATA' removed 'tapyoca-0.0.4.dist-info/LICENSE' removed directory 'tapyoca-0.0.4.dist-info/' + RPM_EC=0 ++ jobs -p + exit 0 Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.2cPNXe + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 
-Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + cd tapyoca-0.0.4 + mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + VALAFLAGS=-g + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + LT_SYS_LIBRARY_PATH=/usr/lib64: + CC=gcc + CXX=g++ + TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + /usr/bin/python3 -Bs /usr/lib/rpm/redhat/pyproject_wheel.py /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir Processing /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4 Preparing metadata (pyproject.toml): started Running command Preparing metadata (pyproject.toml) !!!! 
containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. 
\n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## An image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not found, treat it as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonyms\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of the peoples of different nations, as seen through what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't.
If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives or inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, a word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicity I'll be saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow are listed 73 demonyms along with words extracted from the very first google suggestion when you type.
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with \"so\".\n* Only the tail of the suggestion is shown -- minus the prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, wherever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, the number of suggestions ranges between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(a proxy for \"the number of different things people ask about the said nationality\").
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo there's a long tail of words used for only one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested the suggestions \nfor the `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each demonym whose results were non-empty.
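
As a rough sketch of that acquisition step (assuming the `requests` package, and that the suggest endpoint returns a JSON array whose second element is the list of suggestion strings; the actual code lives in `data_acquisition.py` and may differ):

```python
import requests

def suggestions_for(demonym):
    """Ask google's auto-suggest for 'why are the DEMONYM so ' (note the trailing space)."""
    resp = requests.get(
        'http://suggestqueries.google.com/complete/search',
        params={'client': 'chrome', 'q': f'why are the {demonym} so '},
    )
    resp.raise_for_status()
    return resp.json()[1]  # element 1 of the returned array holds the suggestion strings

# keep only the demonyms that came back with at least one suggestion
results = {d: s for d in ['swiss', 'french', 'german'] if (s := suggestions_for(d))}
```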
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch, is here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired by a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No, \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired by an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus of at least 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, of those, I only kept the ones that had at least 2 letters, and only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nI'm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!), \nextracting (recursively) the names of any (non-underscored) attributes if they were modules or callables, \nas well as extracting the arguments of these callables (when they had signatures).
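
A simplified, non-recursive sketch of that walk (assuming Python 3.10+, where `sys.stdlib_module_names` provides the module list that was so hard to come by; the actual code in py_names.py differs and handles more cases):

```python
import importlib
import inspect
import sys

def names_in_module(module_name):
    """Collect non-underscored attribute names of a module, plus argument names of its callables."""
    try:
        module = importlib.import_module(module_name)
    except Exception:  # some modules won't import on every platform
        return set()
    names = set()
    for attr_name in dir(module):
        if attr_name.startswith('_'):
            continue
        names.add(attr_name)
        obj = getattr(module, attr_name, None)
        if callable(obj):
            try:
                names.update(inspect.signature(obj).parameters)
            except (TypeError, ValueError):  # many builtins have no retrievable signature
                pass
    return names

# note: blindly importing every stdlib module can have side effects
# (e.g. importing `antigravity` opens a browser), hence the "modulo some exceptions"
stdlib_names = set()
for module_name in sys.stdlib_module_names:
    stdlib_names |= names_in_module(module_name)
```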
You can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py) \nand the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).\n\n\n\n# covid\n\n## Bar Chart Races (applied to covid-19 spread)\n\nThis module will show us how to make these:\n- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/\n- Deaths (by country): https://public.flourish.studio/visualisation/1705644/\n- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/\n- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/\n\n### The script\n\nIf you just want to run this as a script to get the job done, you have one here: \nhttps://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py\n\nRun it like this:\n```\n$ python covid_bar_chart_race.py -h\nusage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...\n\npositional arguments:\n {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}\n mk-and-save-covid-data\n :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save :param\n skip_first_days: :param verbose: :return:\n update-covid-data update the coronavirus data\n instructions-to-make-bar-chart-race\n\noptional arguments:\n -h, --help show this help message and exit\n ```\n \n \n### The jupyter notebook\n\nThe notebook (the .ipynb file) shows you how to do it step by step in case you want to reuse the methods for other stuff.\n\n\n\n## Getting and preparing the data\n\nCoronavirus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.\n\nPopulation data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv\n\nIt comes in the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip`) with several `.csv` files in it. We use `py2store` (To install: `pip install py2store`. Project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It allows us to not have to unzip the file and replace the older folder with it every time we download a new one. It also gives us the csvs as `pandas.DataFrame` objects already.
\n\n\n```python\nimport os  # needed for os.path below\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```
```text
  Province/State Country/Region      Lat     Long  1/22/20  1/23/20  1/24/20  1/25/20  1/26/20  1/27/20  ...  3/24/20  3/25/20  3/26/20  3/27/20  3/28/20  3/29/20  3/30/20  3/31/20  4/1/20  4/2/20
0            NaN    Afghanistan  33.0000  65.0000        0        0        0        0        0        0  ...       74       84       94      110      110      120      170      174     237     273
1            NaN        Albania  41.1533  20.1683        0        0        0        0        0        0  ...      123      146      174      186      197      212      223      243     259     277
2            NaN        Algeria  28.0339   1.6596        0        0        0        0        0        0  ...      264      302      367      409      454      511      584      716     847     986
3            NaN        Andorra  42.5063   1.5218        0        0        0        0        0        0  ...      164      188      224      267      308      334      370      376     390     428
4            NaN         Angola -11.2027  17.8739        0        0        0        0        0        0  ...        3        3        4        4        5        7        7        7       8       8

5 rows × 76 columns
```
\n\n\n\n\n```python\ncountry_flag_image_url = data_sources['country_flag_image_url']\ncountry_flag_image_url.head()\n```\n\n\n\n\n
```text
        country  region                              flag_image_url
0        Angola  Africa  https://www.countryflags.io/ao/flat/64.png
1       Burundi  Africa  https://www.countryflags.io/bi/flat/64.png
2         Benin  Africa  https://www.countryflags.io/bj/flat/64.png
3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png
4      Botswana  Africa  https://www.countryflags.io/bw/flat/64.png
```
\n\n\n\n\n```python\nfrom IPython.display import Image\nflag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']\nImage(url=flag_image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Update coronavirus data\n\n\n```python\n# To update the coronavirus data:\ndef update_covid_data(data_sources):\n \"\"\"update the coronavirus data\"\"\"\n if 'kaggle_coronavirus_dataset' in data_sources._caching_store:\n del data_sources._caching_store['kaggle_coronavirus_dataset'] # delete the cached item\n _ = data_sources['kaggle_coronavirus_dataset']\n\n# update_covid_data(data_sources) # uncomment here when you want to update\n```\n\n### Prepare data for flourish upload\n\n\n```python\nimport re\n\ndef print_if_verbose(verbose, *args, **kwargs):\n if verbose:\n print(*args, **kwargs)\n \ndef country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n \"\"\"kind can be 'confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US'\"\"\"\n \n covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\n \n df = covid_datasets[f'time_series_covid_19_{kind}.csv']\n # df = s['time_series_covid_19_deaths.csv']\n if 'Province/State' in df.columns:\n df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a' # to avoid problems arising from NaNs\n\n print_if_verbose(verbose, f\"Before data shape: {df.shape}\")\n\n # drop some columns we don't need\n p = re.compile(r'\\d+/\\d+/\\d+')\n\n assert all(isinstance(x, str) for x in df.columns)\n date_cols = [x for x in df.columns if p.match(x)]\n if not kind.endswith('US'):\n df = df.loc[:, ['Country/Region'] + date_cols]\n # group countries and sum up the contributions of their states/regions/parts\n df['country'] = df.pop('Country/Region')\n df = df.groupby('country').sum()\n else:\n df = df.loc[:, ['Province_State'] + date_cols]\n df['state'] = df.pop('Province_State')\n df = df.groupby('state').sum()\n\n \n print_if_verbose(verbose, f\"After data shape: {df.shape}\")\n df = df.iloc[:, skip_first_days:]\n \n if not kind.endswith('US'):\n # Joining with the country image urls and saving as an xls\n country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])\n t = df.copy()\n t.columns = [str(x)[:10] for x in t.columns]\n t = t.reset_index(drop=False)\n t = country_image_url.merge(t, how='outer')\n t = t.set_index('country')\n df = t\n else: \n pass\n\n return df\n\n\ndef mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):\n t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)\n filepath = f'country_covid_{kind}.xlsx'\n t.to_excel(filepath)\n print_if_verbose(verbose, f\"Was saved here: {filepath}\")\n\n```\n\n\n```python\n# for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\nfor kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:\n mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)\n```\n\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_confirmed.xlsx\n Before data shape: (262, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_deaths.xlsx\n Before data shape: (248, 79)\n After data shape: (183, 75)\n Was saved here: country_covid_recovered.xlsx\n Before data shape: (3253, 86)\n After data shape: (58, 75)\n Was saved here: country_covid_confirmed_US.xlsx\n Before data shape: (3253, 87)\n After data shape: (58, 75)\n 
Was saved here: country_covid_deaths_US.xlsx\n\n\n### Upload to Flourish, tune, and publish\n\nGo to https://public.flourish.studio/, get a free account, and play.\n\nGo to https://app.flourish.studio/templates\n\nChoose \"Bar chart race\". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/\n\n... and then play with the settings\n\n\n## Discussion of the methods\n\n\n```python\nfrom py2store import *\nfrom IPython.display import Image\n```\n\n### country flags images\n\nThe manual data prep looks something like this.\n\n\n```python\nimport pandas as pd\n\n# get the csv data from the url\ncountry_image_url_source = \\\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'\ncountry_image_url = pd.read_csv(country_image_url_source)\n\n# delete the region col (we don't need it)\ndel country_image_url['region']\n\n# rewriting a few (not all) of the country names to match those found in kaggle covid data\n# Note: The list is not complete! Add to it as needed\n# TODO: (Wishful) Using a general smart soft-matching algorithm to do this automatically.\n# TODO: This could use edit-distance, synonyms, acronym generation, etc.\nold_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\nfor old, new in old_and_new:\n country_image_url['country'] = country_image_url['country'].replace(old, new)\n\nimage_url_of_country = country_image_url.set_index('country')['flag_image_url']\n\ncountry_image_url.head()\n```
```text
        country                              flag_image_url
0        Angola  https://www.countryflags.io/ao/flat/64.png
1       Burundi  https://www.countryflags.io/bi/flat/64.png
2         Benin  https://www.countryflags.io/bj/flat/64.png
3  Burkina Faso  https://www.countryflags.io/bf/flat/64.png
4      Botswana  https://www.countryflags.io/bw/flat/64.png
```
\n\n\n\n\n```python\nImage(url=image_url_of_country['Australia'])\n```\n\n\n\n\n\n\n\n\n### Caching the flag images data\n\nDownloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?\n\nCaching. A \"cache aside\" read-cache. That's the word. py2store has tools for that (most of which are in caching.py). \n\nSo let's say we're going to have a local folder where we'll store various data we download. The principle is as follows:\n\n\n```python\nfrom py2store.caching import mk_cached_store\n\nclass TheSource(dict): ...\nthe_cache = {}\nTheCacheSource = mk_cached_store(TheSource, the_cache)\n\nthe_source = TheSource({'green': 'eggs', 'and': 'ham'})\n\nthe_cached_source = TheCacheSource(the_source)\nprint(f\"the_cache: {the_cache}\")\nprint(f\"Getting green...\")\nthe_cached_source['green']\nprint(f\"the_cache: {the_cache}\")\nprint(\"... so the next time the_cached_source will get its green from that the_cache\")\n```\n\n the_cache: {}\n Getting green...\n the_cache: {'green': 'eggs'}\n ... so the next time the_cached_source will get its green from that the_cache\n\n\nBut now, you'll notice a slight problem ahead. What exactly does our source store (or rather reader) look like? In its raw form it would take urls as its keys, and the response of a request as its value. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url as is, to be a local file path. \n\nThere are many ways we could solve this. One way is to add a key map layer on the cache store, so externally, it speaks the url key language, but internally it will map that url to a valid local file path. We've been there, we got the T-shirt!\n\nBut what we're going to do is a bit different: We're going to do the key mapping in the source store itself. It seems to make more sense in our context: We have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.\n\nSo let's start by building this data-source store. We'll start by defining the functions that get us the data we want.
\n\n\n```python\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import os\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n```\n\nNow we can make a store that simply uses these function names as the keys, and their returned value as the values.\n\n\n```python\nfrom py2store.base import KvReader\nfrom functools import lru_cache\n\nclass FuncReader(KvReader):\n _getitem_cache_size = 999\n def __init__(self, funcs):\n # TODO: assert no free arguments (arguments are allowed but must all have defaults)\n self.funcs = funcs\n self._func_of_name = {func.__name__: func for func in funcs}\n\n def __contains__(self, k):\n return k in self._func_of_name\n \n def __iter__(self):\n yield from self._func_of_name\n \n def __len__(self):\n return len(self._func_of_name)\n\n @lru_cache(maxsize=_getitem_cache_size)\n def __getitem__(self, k):\n return self._func_of_name[k]() # call the func\n \n def __hash__(self):\n return 1 # constant hash, so lru_cache can cache __getitem__ calls on an instance\n \n```\n\n\n```python\ndata_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```
```text
             country   region                              flag_image_url
0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
..               ...      ...                                         ...
210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

215 rows × 3 columns
```
\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```\n\n\n\n\n
```text
             country   region                              flag_image_url
0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
..               ...      ...                                         ...
210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

215 rows × 3 columns
```
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
```text
             name  group  year  value subGroup              city_id  lastValue       lat       lon
0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

6252 rows × 9 columns
```
\n\n\n\nBut we wanted this all to be cached locally, right? So a few more lines to do that!\n\n\n```python\nimport os\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\n \nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ndata_sources['country_flag_image_url']\n```
```text
             country   region                              flag_image_url
0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
..               ...      ...                                         ...
210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

215 rows × 3 columns
```
\n\n\n\n\n```python\ndata_sources['city_population_in_time']\n```\n\n\n\n\n
```text
             name  group  year  value subGroup              city_id  lastValue       lat       lon
0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

6252 rows × 9 columns
```
\n\n\n\n\n```python\nz = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(z)\n```\n", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/dist.py:476: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) -------------------------------------------------------------------- running dist_info creating /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info writing /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/PKG-INFO writing dependency_links to /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/dependency_links.txt writing top-level names to /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/top_level.txt writing manifest file '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/SOURCES.txt' reading manifest file '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca.egg-info/SOURCES.txt' creating '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-modern-metadata-azez61tt/tapyoca-0.0.4.dist-info' Preparing metadata (pyproject.toml): finished with status 'done' Building wheels for collected packages: tapyoca Building wheel for tapyoca (pyproject.toml): started Running command Building wheel for tapyoca (pyproject.toml) !!!! 
containing_folder_name=tapyoca-0.0.4 but setup name is tapyoca Setup params ------------------------------------------------------- { "name": "tapyoca", "version": "0.0.4", "url": "https://github.com/thorwhalen/tapyoca", "packages": [ "tapyoca", "tapyoca.agglutination", "tapyoca.covid", "tapyoca.darpa", "tapyoca.demonyms", "tapyoca.indexing_podcasts", "tapyoca.parquet_deformations", "tapyoca.phoneming" ], "include_package_data": true, "platforms": "any", "long_description": "# tapyoca\nA medley of small projects\n\n\n# parquet_deformations\n\nI'm calling these [Parquet deformations](https://www.theguardian.com/artanddesign/alexs-adventures-in-numberland/2014/sep/09/crazy-paving-the-twisted-world-of-parquet-deformations#:~:text=In%20the%201960s%20an%20American,the%20regularity%20of%20the%20tiling.) but purest would lynch me. \n\nReally, I just wanted to transform one word into another word, gradually, as I've seen in some of [Escher's](https://en.wikipedia.org/wiki/M._C._Escher) work, so I looked it up, and saw that it's called parquet deformations. The math looked enticing, but I had no time for that, so I did the first way I could think of: Mapping pixels to pixels (in some fashion -- but nearest neighbors is the method that yields nicest results, under the pixel-level restriction). \n\nOf course, this can be applied to any image (that will be transformed to B/W (not even gray -- I mean actual B/W), and there's several ways you can perform the parquet (I like the gif rendering). \n\nThe main function (exposed as a script) is `mk_deformation_image`. All you need is to specify two images (or words). If you want, of course, you can specify:\n- `n_steps`: Number of steps from start to end image\n- `save_to_file`: path to file to save too (if not given, will just return the image object)\n- `kind`: 'gif', 'horizontal_stack', or 'vertical_stack'\n- `coordinate_mapping_maker`: A function that will return the mapping between start and end. \nThis function should return a pair (`from_coord`, `to_coord`) of aligned matrices whose 2 columns are the the \n`(x, y)` coordinates, and the rows represent aligned positions that should be mapped. 
\n\n\n\n## Examples\n\n### Two words...\n\n\n```python\nfit_to_size = 400\nstart_im = image_of_text('sensor').rotate(90, expand=1)\nend_im = image_of_text('meaning').rotate(90, expand=1)\nstart_and_end_image(start_im, end_im)\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_5_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 15, kind='h').resize((500,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_6_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im.transpose(4), end_im.transpose(4), 5, kind='v').resize((200,200))\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_7_0.png)\n\n\n\n\n```python\nf = 'sensor_meaning_knn.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_scan.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nf = 'sensor_meaning_random.gif'\nmk_deformation_image(start_im.transpose(4), end_im.transpose(4), n_steps=20, save_to_file=f, \n coordinate_mapping_maker='random')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n### From a list of words\n\n\n```python\nstart_words = ['sensor', 'vibration', 'tempature']\nend_words = ['sense', 'meaning', 'detection']\nstart_im, end_im = make_start_and_end_images_with_words(\n start_words, end_words, perm=True, repeat=2, size=150)\nstart_and_end_image(start_im, end_im).resize((600, 200))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_12_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 5)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_13_0.png)\n\n\n\n\n```python\nf = 'bunch_of_words.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## From files\n\n\n```python\nstart_im = Image.open('sensor_strip_01.png')\nend_im = Image.open('sense_strip_01.png')\nstart_and_end_image(start_im.resize((200, 500)), end_im.resize((200, 500)))\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_16_0.png)\n\n\n\n\n```python\nim = mk_deformation_image(start_im, end_im, 7)\nim\n```\n\n\n\n\n![png](tapyoca/parquet_deformations/img/outputs/output_17_0.png)\n\n\n\n\n```python\nf = 'medley.gif'\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f)\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n\n```python\nmk_deformation_image(start_im, end_im, n_steps=20, save_to_file=f, coordinate_mapping_maker='scan')\ndisplay_gif(f)\n```\n\n\n\n\n\n\n\n\n## an image and some text\n\n\n```python\nstart_im = 'img/waveform_01.png' # will first look for a file, and if not consider as text\nend_im = 'makes sense'\n\nmk_gif_of_deformations(start_im, end_im, n_steps=20, \n save_to_file='image_and_text.gif')\ndisplay_gif('image_and_text.gif') \n```\n\n\n\n\n\n\n\n\n\n\n\n# demonys\n\n## What do we think about other peoples?\n\nThis project is meant to get an idea of what people think of people for different nations, as seen by what they ask google about them. \n\nHere I use python code to acquire, clean up, and analyze the data. \n\n### Demonym\n\nIf you're like me and enjoy the false and fleeting impression of superiority that comes when you know a word someone else doesn't. 
If you're like me and go to parties for the sole purpose of seeking victims to get a one-up on, here's a cool word to add to your arsenal:\n\n**demonym**: a noun used to denote the natives or inhabitants of a particular country, state, city, etc.\n_\"he struggled for the correct demonym for the people of Manchester\"_\n\n### Back-story of this analysis\n \nDuring a discussion (about traveling in Europe) someone said \"why are the swiss so miserable\". Now, I wouldn't say that the swiss were especially miserable (a couple of ex-girlfriends aside), but to be fair he was contrasting with Italians, so perhaps he has a point. I apologize if you are swiss, or one of the two ex-girlfriends -- nothing personal, this is all for effect. \n\nWe googled \"why are the swiss so \", and sure enough, \"why are the swiss so miserable\" came up as one of the suggestions. So we got curious and started googling other peoples: the French, the Germans, etc.\n\nThat's the back-story of this analysis. This analysis is meant to get an idea of what we think of peoples from other countries. Of course, one can rightfully critique the approach I'll take to gauge \"what we think\" -- all three of these words should, but will not, be defined. I'm just going to see what google's *current* auto-suggest comes back with when I enter \"why are the X so \" (where X will be a noun that denotes the natives of inhabitants of a particular country; a *demonym* if you will). \n\n### Warning\n\nAgain, word of warning: All data and analyses are biased. \nTake everything you'll read here (and to be fair, what you read anywhere) with a grain of salt. \nFor simplicitly I'll saying things like \"what we think of...\" or \"who do we most...\", etc.\nBut I don't **really** mean that.\n\n### Resources\n\n* http://www.geography-site.co.uk/pages/countries/demonyms.html for my list of demonyms.\n* google for my suggestion engine, using the url prefix: `http://suggestqueries.google.com/complete/search?client=chrome&q=`\n\n\n## The results\n\n### In a nutshell\n\nBelow is listed 73 demonyms along with words extracted from the very first google suggestion when you type. 
\n\n`why are the DEMONYM so `\n\n```text\nafghan \t eyes beautiful\nalbanian \t beautiful\namerican \t girl dolls expensive\naustralian\t tall\nbelgian \t fries good\nbhutanese \t happy\nbrazilian \t good at football\nbritish \t full of grief and despair\nbulgarian \t properties cheap\nburmese \t cats affectionate\ncambodian \t cows skinny\ncanadian \t nice\nchinese \t healthy\ncolombian \t avocados big\ncuban \t cigars good\nczech \t tall\ndominican \t republic and haiti different\negyptian \t gods important\nenglish \t reserved\neritrean \t beautiful\nethiopian \t beautiful\nfilipino \t proud\nfinn \t shoes expensive\nfrench \t healthy\ngerman \t tall\ngreek \t gods messed up\nhaitian \t parents strict\nhungarian \t words long\nindian \t tv debates chaotic\nindonesian\t smart\niranian \t beautiful\nisraeli \t startups successful\nitalian \t short\njamaican \t sprinters fast\njapanese \t polite\nkenyan \t runners good\nlebanese \t rich\nmalagasy \t names long\nmalaysian \t drivers bad\nmaltese \t rude\nmongolian \t horses small\nmoroccan \t rugs expensive\nnepalese \t beautiful\nnigerian \t tall\nnorth korean\t hats big\nnorwegian \t flights cheap\npakistani \t fair\nperuvian \t blueberries big\npole \t vaulters hot\nportuguese\t short\npuerto rican\t and cuban flags similar\nromanian \t beautiful\nrussian \t good at math\nsamoan \t big\nsaudi \t arrogant\nscottish \t bitter\nsenegalese\t tall\nserbian \t tall\nsingaporean\t rude\nsomali \t parents strict\nsouth african\t plugs big\nsouth korean\t tall\nsri lankan\t dark\nsudanese \t tall\nswiss \t good at making watches\nsyrian \t families large\ntaiwanese \t pretty\nthai \t pretty\ntongan \t big\nukrainian \t beautiful\nvietnamese\t fiercely nationalistic\nwelsh \t dark\nzambian \t emeralds cheap\n```\n\n\nNotes:\n* The queries actually have a space after the \"so\", which matters so as to omit suggestions containing words that start with so.\n* Only the tail of the suggestion is shown -- minus prefix (`why are the DEMONYM` or `why are DEMONYM`) as well as the `so`, where ever it lands in the suggestion. \nFor example, the first suggestion for the american demonym was \"why are american dolls so expensive\", which results in the \"dolls expensive\" association. \n\n\n### Who do we most talk/ask about?\n\nThe original list contained 217 demonyms, but many of these yielded no suggestions (to the specific query format I used, that is). \nOnly 73 demonyms gave me at least one suggestion. \nBut within those, number of suggestions range between 1 and 20 (which is probably the default maximum number of suggestions for the API I used). \nSo, pretending that the number of suggestions is an indicator of how much we have to say, or how many different opinions we have, of each of the covered nationalities, \nhere's the top 15 demonyms people talk about, with the corresponding number of suggestions \n(proxy for \"the number of different things people ask about the said nationality). 
\n\n```text\nfrench 20\nsingaporean 20\ngerman 20\nbritish 20\nswiss 20\nenglish 19\nitalian 18\ncuban 18\ncanadian 18\nwelsh 18\naustralian 17\nmaltese 16\namerican 16\njapanese 14\nscottish 14\n```\n\n### Who do we least talk/ask about?\n\nConversely, here are the 19 demonyms that came back with only one suggestion.\n\n```text\nsomali 1\nbhutanese 1\nsyrian 1\ntongan 1\ncambodian 1\nmalagasy 1\nsaudi 1\nserbian 1\nczech 1\neritrean 1\nfinn 1\npuerto rican 1\npole 1\nhaitian 1\nhungarian 1\nperuvian 1\nmoroccan 1\nmongolian 1\nzambian 1\n```\n\n### What do we think about people?\n\nWhy are the French so...\n\nHow would you (if you're (un)lucky enough to know the French) finish this sentence?\nYou might even have several opinions about the French, and any other group of people you've rubbed shoulders with.\nWhat words would your palette contain to describe different nationalities?\nWhat words would others (at least those that ask questions to google) use?\n\nWell, here's what my auto-suggest search gave me. A set of 357 unique words and expressions to describe the 72 nationalities. \nSo a long tail of words use only for one nationality. But some words occur for more than one nationality. \nHere are the top 12 words/expressions used to describe people of the world. \n\n```text\nbeautiful 11\ntall 11\nshort 9\nnames long 8\nproud 8\nparents strict 8\nsmart 8\nnice 7\nboring 6\nrich 5\ndark 5\nsuccessful 5\n```\n\n### Who is beautiful? Who is tall? Who is short? Who is smart?\n\n```text\nbeautiful : albanian, eritrean, ethiopian, filipino, iranian, lebanese, nepalese, pakistani, romanian, ukrainian, vietnamese\ntall : australian, czech, german, nigerian, pakistani, samoan, senegalese, serbian, south korean, sudanese, taiwanese\nshort : filipino, indonesian, italian, maltese, nepalese, pakistani, portuguese, singaporean, welsh\nnames long : indian, malagasy, nigerian, portuguese, russian, sri lankan, thai, welsh\nproud : albanian, ethiopian, filipino, iranian, lebanese, portuguese, scottish, welsh\nparents strict : albanian, ethiopian, haitian, indian, lebanese, pakistani, somali, sri lankan\nsmart : indonesian, iranian, lebanese, pakistani, romanian, singaporean, taiwanese, vietnamese\nnice : canadian, english, filipino, nepalese, portuguese, taiwanese, thai\nboring : british, english, french, german, singaporean, swiss\nrich : lebanese, pakistani, singaporean, taiwanese, vietnamese\ndark : filipino, senegalese, sri lankan, vietnamese, welsh\nsuccessful : chinese, english, japanese, lebanese, swiss\n```\n\n## How did I do it?\n\nI scraped a list of (country, demonym) pairs from a table in http://www.geography-site.co.uk/pages/countries/demonyms.html.\n\nThen I diagnosed these and manually made a mapping to simplify some \"complex\" entries, \nsuch as mapping an entry such as \"Irishman or Irishwoman or Irish\" to \"Irish\".\n\nUsing the google suggest API (http://suggestqueries.google.com/complete/search?client=chrome&q=), I requested what the suggestions \nfor `why are the $demonym so ` query pattern, for `$demonym` running through all 217 demonyms from the list above, \nstoring the results for each if the results were non-empty. 
\n\nThen, it was just a matter of pulling this data into memory, formatting it a bit, and creating a pandas dataframe that I could then interrogate.\n \n## Resources you can find here\n\nThe code to do this analysis yourself, from scratch here: `data_acquisition.py`.\n\nThe jupyter notebook I actually used when I developed this: `01 - Demonyms and adjectives - why are the french so....ipynb`\n \nNote you'll need to pip install py2store if you haven't already.\n\nIn the `data` folder you'll find\n* country_demonym.p: A pickle of a dataframe of countries and corresponding demonyms\n* country_demonym.xlsx: The same as above, but in excel form\n* demonym_suggested_characteristics.p: A pickle of 73 demonyms and auto-suggestion information, including characteristics. \n* what_we_think_about_demonyns.xlsx: An excel containing various statistics about demonyms and their (perceived) characteristics\n \n\n\n\n\n\n# Agglutinations\n\nInspired from a [tweet](https://twitter.com/raymondh/status/1311003482531401729) from Raymond Hettinger this morning:\n\n_Resist the urge to elide the underscore in multiword function or method names_\n\nSo I wondered...\n\n## Gluglus\n\nThe gluglu of a word is the number of partitions you can make of that word into words (of length at least 2 (so no using a or i)).\n(No \"gluglu\" isn't an actual term -- unless everyone starts using it from now on. \nBut it was inspired from an actual [linguistic term](https://en.wikipedia.org/wiki/Agglutination).)\n\nFor example, the gluglu of ``newspaper`` is 4:\n\n```\nnewspaper\n new spa per\n news pa per\n news paper\n```\n\nEvery (valid) word has gluglu at least 1.\n\n\n## How many standard library names have gluglus at last 2?\n\n108\n\nHere's [the list](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_gluglus.txt) of all of them.\n\nThe winner has a gluglu of 6 (not 7 because formatannotationrelativeto isn't in the dictionary)\n\n```\nformatannotationrelativeto\n\tfor mat an not at ion relative to\n\tfor mat annotation relative to\n\tform at an not at ion relative to\n\tform at annotation relative to\n\tformat an not at ion relative to\n\tformat annotation relative to\n```\n\n## Details\n\n### Dictionary\n\nReally it depends on what dictionary we use. \nHere, I used a very conservative one. \nThe intersection of two lists: The [corncob](http://www.mieliestronk.com/corncob_lowercase.txt) \nand the [google10000](https://raw.githubusercontent.com/first20hours/google-10000-english/master/google-10000-english-usa.txt) word lists.\nAdditionally, I only kept of those, those that had at least 2 letters, and had only letters (no hyphens or disturbing diacritics).\n\nDiacritics. Look it up. Impress your next nerd date.\n\nIm left with 8116 words. You can find them [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/words_8116.csv).\n\n### Standard Lib Names\n\nSurprisingly, that was the hardest part. I know I'm missing some, but that's enough rabbit-holing. \n\nWhat I did (modulo some exceptions I won't look into) was to walk the standard lib modules (even that list wasn't a given!) 
You can find the code I used to extract these names [here](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/py_names.py)
and the actual list [there](https://github.com/thorwhalen/tapyoca/blob/master/tapyoca/agglutination/standard_lib_module_names.csv).


# covid

## Bar Chart Races (applied to covid-19 spread)

This module shows how to make these:
- Confirmed cases (by country): https://public.flourish.studio/visualisation/1704821/
- Deaths (by country): https://public.flourish.studio/visualisation/1705644/
- US Confirmed cases (by state): https://public.flourish.studio/visualisation/1794768/
- US Deaths (by state): https://public.flourish.studio/visualisation/1794797/

### The script

If you just want to run this as a script to get the job done, you have one here:
https://raw.githubusercontent.com/thorwhalen/tapyoca/master/covid/covid_bar_chart_race.py

Run it like this:

```
$ python covid_bar_chart_race.py -h
usage: covid_bar_chart_race.py [-h] {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race} ...

positional arguments:
  {mk-and-save-covid-data,update-covid-data,instructions-to-make-bar-chart-race}
    mk-and-save-covid-data
                        :param data_sources: Dirpath or py2store Store where the data is :param kinds: The kinds of data you want to compute and save
                        :param skip_first_days: :param verbose: :return:
    update-covid-data   update the coronavirus data
    instructions-to-make-bar-chart-race

optional arguments:
  -h, --help            show this help message and exit
```

### The jupyter notebook

The notebook (the .ipynb file) shows you how to do it step by step, in case you want to reuse the methods for other stuff.


## Getting and preparing the data

Coronavirus data here: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset (direct download: https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset/download). It's currently updated daily, so download a fresh copy if you want.

Population data here: http://api.worldbank.org/v2/en/indicator/SP.POP.TOTL?downloadformat=csv

The covid data comes in the form of a zip file (currently named `novel-corona-virus-2019-dataset.zip`) with several `.csv` files in it. We use `py2store` (to install: `pip install py2store`; project lives here: https://github.com/i2mint/py2store) to access and pre-prepare it. It saves us from having to unzip the file and replace the older folder every time we download a new one, and it hands us the csvs as `pandas.DataFrame` objects directly.
\n\n\n```python\nimport pandas as pd\nfrom io import BytesIO\nfrom py2store import kv_wrap, ZipReader # google it and pip install it\nfrom py2store.caching import mk_cached_store\nfrom py2store import QuickPickleStore\nfrom py2store.sources import FuncReader\n\ndef country_flag_image_url():\n import pandas as pd\n return pd.read_csv(\n 'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')\n\ndef kaggle_coronavirus_dataset():\n import kaggle\n from io import BytesIO\n # didn't find the pure binary download function, so using temp dir to emulate\n from tempfile import mkdtemp \n download_dir = mkdtemp()\n filename = 'novel-corona-virus-2019-dataset.zip'\n zip_file = os.path.join(download_dir, filename)\n \n dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'\n kaggle.api.dataset_download_files(dataset, download_dir)\n with open(zip_file, 'rb') as fp:\n b = fp.read()\n return BytesIO(b)\n\ndef city_population_in_time():\n import pandas as pd\n return pd.read_csv(\n 'https://gist.githubusercontent.com/johnburnmurdoch/'\n '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'\n )\n\ndef country_flag_image_url_prep(df: pd.DataFrame):\n # delete the region col (we don't need it)\n del df['region']\n # rewriting a few (not all) of the country names to match those found in kaggle covid data\n # Note: The list is not complete! Add to it as needed\n old_and_new = [('USA', 'US'), \n ('Iran, Islamic Rep.', 'Iran'), \n ('UK', 'United Kingdom'), \n ('Korea, Rep.', 'Korea, South')]\n for old, new in old_and_new:\n df['country'] = df['country'].replace(old, new)\n\n return df\n\n\n@kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x))) # this is to format the data as a dataframe\nclass ZippedCsvs(ZipReader):\n pass\n# equivalent to ZippedCsvs = kv_wrap.outcoming_vals(lambda x: pd.read_csv(BytesIO(x)))(ZipReader)\n```\n\n\n```python\n# Enter here the place you want to cache your data\nmy_local_cache = os.path.expanduser('~/ddir/my_sources')\n```\n\n\n```python\nCachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))\n```\n\n\n```python\ndata_sources = CachedFuncReader([country_flag_image_url, \n kaggle_coronavirus_dataset, \n city_population_in_time])\nlist(data_sources)\n```\n\n\n\n\n ['country_flag_image_url',\n 'kaggle_coronavirus_dataset',\n 'city_population_in_time']\n\n\n\n\n```python\ncovid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])\nlist(covid_datasets)\n```\n\n\n\n\n ['COVID19_line_list_data.csv',\n 'COVID19_open_line_list.csv',\n 'covid_19_data.csv',\n 'time_series_covid_19_confirmed.csv',\n 'time_series_covid_19_confirmed_US.csv',\n 'time_series_covid_19_deaths.csv',\n 'time_series_covid_19_deaths_US.csv',\n 'time_series_covid_19_recovered.csv']\n\n\n\n\n```python\ncovid_datasets['time_series_covid_19_confirmed.csv'].head()\n```\n\n\n\n\n

      Province/State Country/Region      Lat     Long  1/22/20  1/23/20  ...  3/31/20  4/1/20  4/2/20
    0            NaN    Afghanistan  33.0000  65.0000        0        0  ...      174     237     273
    1            NaN        Albania  41.1533  20.1683        0        0  ...      243     259     277
    2            NaN        Algeria  28.0339   1.6596        0        0  ...      716     847     986
    3            NaN        Andorra  42.5063   1.5218        0        0  ...      376     390     428
    4            NaN         Angola -11.2027  17.8739        0        0  ...        7       8       8

    5 rows × 76 columns

```python
country_flag_image_url = data_sources['country_flag_image_url']
country_flag_image_url.head()
```

            country  region                              flag_image_url
    0        Angola  Africa  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  Africa  https://www.countryflags.io/bi/flat/64.png
    2         Benin  Africa  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  Africa  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  Africa  https://www.countryflags.io/bw/flat/64.png

```python
from IPython.display import Image
flag_image_url_of_country = country_flag_image_url.set_index('country')['flag_image_url']
Image(url=flag_image_url_of_country['Australia'])
```

    (displays the Australian flag image)

### Update coronavirus data

```python
# To update the coronavirus data:
def update_covid_data(data_sources):
    """update the coronavirus data"""
    if 'kaggle_coronavirus_dataset' in data_sources._caching_store:
        del data_sources._caching_store['kaggle_coronavirus_dataset']  # delete the cached item
    _ = data_sources['kaggle_coronavirus_dataset']

# update_covid_data(data_sources)  # uncomment here when you want to update
```

### Prepare data for flourish upload

```python
import re

def print_if_verbose(verbose, *args, **kwargs):
    if verbose:
        print(*args, **kwargs)

def country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):
    """kind can be 'confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US'"""

    covid_datasets = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])

    df = covid_datasets[f'time_series_covid_19_{kind}.csv']
    if 'Province/State' in df.columns:
        df.loc[df['Province/State'].isna(), 'Province/State'] = 'n/a'  # to avoid problems arising from NaNs

    print_if_verbose(verbose, f"Before data shape: {df.shape}")

    # keep only the country/state column and the date columns
    p = re.compile(r'\d+/\d+/\d+')  # raw string, so \d isn't an invalid escape

    assert all(isinstance(x, str) for x in df.columns)
    date_cols = [x for x in df.columns if p.match(x)]
    if not kind.endswith('US'):
        df = df.loc[:, ['Country/Region'] + date_cols]
        # group countries and sum up the contributions of their states/regions/parts
        df['country'] = df.pop('Country/Region')
        df = df.groupby('country').sum()
    else:
        df = df.loc[:, ['Province_State'] + date_cols]
        df['state'] = df.pop('Province_State')
        df = df.groupby('state').sum()

    print_if_verbose(verbose, f"After data shape: {df.shape}")
    df = df.iloc[:, skip_first_days:]

    if not kind.endswith('US'):
        # Joining with the country image urls and saving as an xls
        country_image_url = country_flag_image_url_prep(data_sources['country_flag_image_url'])
        t = df.copy()
        t.columns = [str(x)[:10] for x in t.columns]
        t = t.reset_index(drop=False)
        t = country_image_url.merge(t, how='outer')
        t = t.set_index('country')
        df = t

    return df


def mk_and_save_country_data_for_data_kind(data_sources, kind='confirmed', skip_first_days=0, verbose=False):
    t = country_data_for_data_kind(data_sources, kind, skip_first_days, verbose)
    filepath = f'country_covid_{kind}.xlsx'
    t.to_excel(filepath)
    print_if_verbose(verbose, f"Was saved here: {filepath}")
```

```python
for kind in ['confirmed', 'deaths', 'recovered', 'confirmed_US', 'deaths_US']:
    mk_and_save_country_data_for_data_kind(data_sources, kind=kind, skip_first_days=39, verbose=True)
```

    Before data shape: (262, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_confirmed.xlsx
    Before data shape: (262, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_deaths.xlsx
    Before data shape: (248, 79)
    After data shape: (183, 75)
    Was saved here: country_covid_recovered.xlsx
    Before data shape: (3253, 86)
    After data shape: (58, 75)
    Was saved here: country_covid_confirmed_US.xlsx
    Before data shape: (3253, 87)
    After data shape: (58, 75)
    Was saved here: country_covid_deaths_US.xlsx

### Upload to Flourish, tune, and publish

Go to https://public.flourish.studio/, get a free account, and play.

Go to https://app.flourish.studio/templates

Choose "Bar chart race". At the time of writing this, it was here: https://app.flourish.studio/visualisation/1706060/

... and then play with the settings.


## Discussion of the methods

```python
from py2store import *
from IPython.display import Image
```

### country flags images

The manual data prep looks something like this.

```python
import pandas as pd

# get the csv data from the url
country_image_url_source = \
    'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv'
country_image_url = pd.read_csv(country_image_url_source)

# delete the region col (we don't need it)
del country_image_url['region']

# rewriting a few (not all) of the country names to match those found in kaggle covid data
# Note: The list is not complete! Add to it as needed
# TODO: (Wishful) Use a general smart soft-matching algorithm to do this automatically.
# TODO: This could use edit-distance, synonyms, acronym generation, etc.
old_and_new = [('USA', 'US'),
               ('Iran, Islamic Rep.', 'Iran'),
               ('UK', 'United Kingdom'),
               ('Korea, Rep.', 'Korea, South')]
for old, new in old_and_new:
    country_image_url['country'] = country_image_url['country'].replace(old, new)

image_url_of_country = country_image_url.set_index('country')['flag_image_url']

country_image_url.head()
```

            country                              flag_image_url
    0        Angola  https://www.countryflags.io/ao/flat/64.png
    1       Burundi  https://www.countryflags.io/bi/flat/64.png
    2         Benin  https://www.countryflags.io/bj/flat/64.png
    3  Burkina Faso  https://www.countryflags.io/bf/flat/64.png
    4      Botswana  https://www.countryflags.io/bw/flat/64.png

```python
Image(url=image_url_of_country['Australia'])
```

    (displays the Australian flag image)

### Caching the flag images data

Downloading our data sources every time we need them is not sustainable. What if they're big? What if you're offline or have slow internet (yes, dear future reader, even in the US, during coronavirus times!)?

Caching. A "cache aside" read-cache. That's the word. py2store has tools for that (most of which are in caching.py).

So let's say we're going to have a local folder where we'll store various data we download. The principle is as follows:

```python
from py2store.caching import mk_cached_store

class TheSource(dict): ...
the_cache = {}
TheCacheSource = mk_cached_store(TheSource, the_cache)

the_source = TheSource({'green': 'eggs', 'and': 'ham'})

the_cached_source = TheCacheSource(the_source)
print(f"the_cache: {the_cache}")
print("Getting green...")
the_cached_source['green']
print(f"the_cache: {the_cache}")
print("... so the next time, the_cached_source will get its green from the_cache")
```

    the_cache: {}
    Getting green...
    the_cache: {'green': 'eggs'}
    ... so the next time, the_cached_source will get its green from the_cache

But now you'll notice a slight problem ahead. What exactly would our source store (or rather reader) look like? In its raw form it would take urls as its keys, and the response of a request as its values. That store wouldn't have an `__iter__` for sure (unless you're Google). But more to the point here, the `mk_cached_store` tool uses the same key for the source and the cache, and we can't just use the url, as is, as a local file path.

There are many ways we could solve this. One way is to add a key-map layer on the cache store, so externally it speaks the url key language, but internally it maps that url to a valid local file path. We've been there, we got the T-shirt! A hypothetical sketch of such a layer is below.
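For illustration only (a made-up sketch; `UrlKeyedCache` is invented here and is not py2store's actual wrapping machinery):

```python
# Hypothetical sketch of a key-mapping cache layer: externally it speaks urls,
# internally it maps each url to a filesystem-safe key. Not py2store's API.
from urllib.parse import quote

class UrlKeyedCache:
    def __init__(self, store):
        self.store = store  # a dict-like store whose keys must be valid file names

    def _internal_key(self, url):
        return quote(url, safe='')  # 'https://a/b' -> 'https%3A%2F%2Fa%2Fb'

    def __contains__(self, url):
        return self._internal_key(url) in self.store

    def __getitem__(self, url):
        return self.store[self._internal_key(url)]

    def __setitem__(self, url, value):
        self.store[self._internal_key(url)] = value
```

But what we're going to do is a bit different: we're going to do the key mapping in the source store itself. It seems to make more sense in our context: we have a data source of `name: data` pairs, and if we impose that the name should be a valid file name, we don't need to have a key map in the cache store.

So let's start by building this store. We'll start by defining the functions that get us the data we want.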

```python
def country_flag_image_url():
    import pandas as pd
    return pd.read_csv(
        'https://raw.githubusercontent.com/i2mint/examples/master/data/country_flag_image_url.csv')

def kaggle_coronavirus_dataset():
    import kaggle
    import os
    from io import BytesIO
    # didn't find the pure binary download function, so using temp dir to emulate
    from tempfile import mkdtemp
    download_dir = mkdtemp()
    filename = 'novel-corona-virus-2019-dataset.zip'
    zip_file = os.path.join(download_dir, filename)

    dataset = 'sudalairajkumar/novel-corona-virus-2019-dataset'
    kaggle.api.dataset_download_files(dataset, download_dir)
    with open(zip_file, 'rb') as fp:
        b = fp.read()
    return BytesIO(b)

def city_population_in_time():
    import pandas as pd
    return pd.read_csv(
        'https://gist.githubusercontent.com/johnburnmurdoch/'
        '4199dbe55095c3e13de8d5b2e5e5307a/raw/fa018b25c24b7b5f47fd0568937ff6c04e384786/city_populations'
    )
```

Now we can make a store that simply uses these function names as the keys, and their returned values as the values.

```python
from py2store.base import KvReader
from functools import lru_cache

class FuncReader(KvReader):
    _getitem_cache_size = 999

    def __init__(self, funcs):
        # TODO: assert no free arguments (arguments are allowed but must all have defaults)
        self.funcs = funcs
        self._func_of_name = {func.__name__: func for func in funcs}

    def __contains__(self, k):
        return k in self._func_of_name

    def __iter__(self):
        yield from self._func_of_name

    def __len__(self):
        return len(self._func_of_name)

    @lru_cache(maxsize=_getitem_cache_size)
    def __getitem__(self, k):
        return self._func_of_name[k]()  # call the func

    def __hash__(self):
        # a constant hash makes instances hashable, which the lru_cache
        # on __getitem__ needs (its cache keys include self)
        return 1
```

```python
data_sources = FuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])
list(data_sources)
```

    ['country_flag_image_url',
     'kaggle_coronavirus_dataset',
     'city_population_in_time']

```python
data_sources['country_flag_image_url']
```

                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns

```python
data_sources['city_population_in_time']
```

                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns

But we wanted this all to be cached locally, right? So a few more lines to do that!

```python
import os
from py2store.caching import mk_cached_store
from py2store import QuickPickleStore

my_local_cache = os.path.expanduser('~/ddir/my_sources')

CachedFuncReader = mk_cached_store(FuncReader, QuickPickleStore(my_local_cache))
```

```python
data_sources = CachedFuncReader([country_flag_image_url, kaggle_coronavirus_dataset, city_population_in_time])
list(data_sources)
```

    ['country_flag_image_url',
     'kaggle_coronavirus_dataset',
     'city_population_in_time']

```python
data_sources['country_flag_image_url']
```

                 country   region                              flag_image_url
    0             Angola   Africa  https://www.countryflags.io/ao/flat/64.png
    1            Burundi   Africa  https://www.countryflags.io/bi/flat/64.png
    2              Benin   Africa  https://www.countryflags.io/bj/flat/64.png
    3       Burkina Faso   Africa  https://www.countryflags.io/bf/flat/64.png
    4           Botswana   Africa  https://www.countryflags.io/bw/flat/64.png
    ..               ...      ...                                         ...
    210  Solomon Islands  Oceania  https://www.countryflags.io/sb/flat/64.png
    211            Tonga  Oceania  https://www.countryflags.io/to/flat/64.png
    212           Tuvalu  Oceania  https://www.countryflags.io/tv/flat/64.png
    213          Vanuatu  Oceania  https://www.countryflags.io/vu/flat/64.png
    214            Samoa  Oceania  https://www.countryflags.io/ws/flat/64.png

    215 rows × 3 columns

```python
data_sources['city_population_in_time']
```

                 name  group  year  value subGroup              city_id  lastValue       lat       lon
    0            Agra  India  1575  200.0    India         Agra - India      200.0  27.18333  78.01667
    1            Agra  India  1576  212.0    India         Agra - India      200.0  27.18333  78.01667
    2            Agra  India  1577  224.0    India         Agra - India      212.0  27.18333  78.01667
    3            Agra  India  1578  236.0    India         Agra - India      224.0  27.18333  78.01667
    4            Agra  India  1579  248.0    India         Agra - India      236.0  27.18333  78.01667
    ...           ...    ...   ...    ...      ...                  ...        ...       ...       ...
    6247  Vijayanagar  India  1561  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6248  Vijayanagar  India  1562  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6249  Vijayanagar  India  1563  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6250  Vijayanagar  India  1564  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200
    6251  Vijayanagar  India  1565  480.0    India  Vijayanagar - India      480.0  15.33500  76.46200

    6252 rows × 9 columns

```python
z = ZippedCsvs(data_sources['kaggle_coronavirus_dataset'])
list(z)
```
", "long_description_content_type": "text/markdown", "description_file": "README.md", "root_url": "https://github.com/thorwhalen", "description": "A medley of things that got coded because there was an itch to do so", "author": "thorwhalen", "license": "Apache Software License", "description-file": "README.md", "install_requires": [], "keywords": [ "documentation", "packaging", "publishing" ] }/usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description_file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'root_url' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'description-file' warnings.warn(msg) /usr/lib/python3.13/site-packages/setuptools/dist.py:476: SetuptoolsDeprecationWarning: Invalid dash-separated options !! ******************************************************************************** Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead. This deprecation is overdue, please update your project and remove deprecated calls to avoid build errors in the future. See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details. ******************************************************************************** !! opt = self.warn_dash_deprecation(opt, section) -------------------------------------------------------------------- running bdist_wheel running build running build_py creating build creating build/lib creating build/lib/tapyoca copying tapyoca/__init__.py -> build/lib/tapyoca creating build/lib/tapyoca/agglutination copying tapyoca/agglutination/__init__.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/data_acquisition.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/partitions.py -> build/lib/tapyoca/agglutination copying tapyoca/agglutination/py_names.py -> build/lib/tapyoca/agglutination creating build/lib/tapyoca/covid copying tapyoca/covid/__init__.py -> build/lib/tapyoca/covid copying tapyoca/covid/covid_bar_chart_race.py -> build/lib/tapyoca/covid creating build/lib/tapyoca/darpa copying tapyoca/darpa/__init__.py -> build/lib/tapyoca/darpa copying tapyoca/darpa/darpa.py -> build/lib/tapyoca/darpa creating build/lib/tapyoca/demonyms copying tapyoca/demonyms/__init__.py -> build/lib/tapyoca/demonyms copying tapyoca/demonyms/data_acquisition.py -> build/lib/tapyoca/demonyms creating build/lib/tapyoca/indexing_podcasts copying tapyoca/indexing_podcasts/__init__.py -> build/lib/tapyoca/indexing_podcasts copying tapyoca/indexing_podcasts/prep.py -> build/lib/tapyoca/indexing_podcasts creating build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/__init__.py -> build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/parquet_deformations.py -> build/lib/tapyoca/parquet_deformations copying tapyoca/parquet_deformations/py_fonts.py -> build/lib/tapyoca/parquet_deformations creating build/lib/tapyoca/phoneming copying tapyoca/phoneming/__init__.py -> build/lib/tapyoca/phoneming copying tapyoca/phoneming/explore.py -> build/lib/tapyoca/phoneming running egg_info writing tapyoca.egg-info/PKG-INFO writing dependency_links to tapyoca.egg-info/dependency_links.txt writing
top-level names to tapyoca.egg-info/top_level.txt reading manifest file 'tapyoca.egg-info/SOURCES.txt' adding license file 'LICENSE' writing manifest file 'tapyoca.egg-info/SOURCES.txt' installing to build/bdist.linux-x86_64/wheel running install running install_lib creating build/bdist.linux-x86_64 creating build/bdist.linux-x86_64/wheel creating build/bdist.linux-x86_64/wheel/tapyoca copying build/lib/tapyoca/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca creating build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/data_acquisition.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/partitions.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination copying build/lib/tapyoca/agglutination/py_names.py -> build/bdist.linux-x86_64/wheel/tapyoca/agglutination creating build/bdist.linux-x86_64/wheel/tapyoca/covid copying build/lib/tapyoca/covid/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/covid copying build/lib/tapyoca/covid/covid_bar_chart_race.py -> build/bdist.linux-x86_64/wheel/tapyoca/covid creating build/bdist.linux-x86_64/wheel/tapyoca/darpa copying build/lib/tapyoca/darpa/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/darpa copying build/lib/tapyoca/darpa/darpa.py -> build/bdist.linux-x86_64/wheel/tapyoca/darpa creating build/bdist.linux-x86_64/wheel/tapyoca/demonyms copying build/lib/tapyoca/demonyms/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/demonyms copying build/lib/tapyoca/demonyms/data_acquisition.py -> build/bdist.linux-x86_64/wheel/tapyoca/demonyms creating build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts copying build/lib/tapyoca/indexing_podcasts/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts copying build/lib/tapyoca/indexing_podcasts/prep.py -> build/bdist.linux-x86_64/wheel/tapyoca/indexing_podcasts creating build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/parquet_deformations.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations copying build/lib/tapyoca/parquet_deformations/py_fonts.py -> build/bdist.linux-x86_64/wheel/tapyoca/parquet_deformations creating build/bdist.linux-x86_64/wheel/tapyoca/phoneming copying build/lib/tapyoca/phoneming/__init__.py -> build/bdist.linux-x86_64/wheel/tapyoca/phoneming copying build/lib/tapyoca/phoneming/explore.py -> build/bdist.linux-x86_64/wheel/tapyoca/phoneming running install_egg_info Copying tapyoca.egg-info to build/bdist.linux-x86_64/wheel/tapyoca-0.0.4-py3.13.egg-info running install_scripts creating build/bdist.linux-x86_64/wheel/tapyoca-0.0.4.dist-info/WHEEL creating '/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir/pip-wheel-31ti6nph/.tmp-vers3uzr/tapyoca-0.0.4-py3-none-any.whl' and adding 'build/bdist.linux-x86_64/wheel' to it adding 'tapyoca/__init__.py' adding 'tapyoca/agglutination/__init__.py' adding 'tapyoca/agglutination/data_acquisition.py' adding 'tapyoca/agglutination/partitions.py' adding 'tapyoca/agglutination/py_names.py' adding 'tapyoca/covid/__init__.py' adding 'tapyoca/covid/covid_bar_chart_race.py' adding 'tapyoca/darpa/__init__.py' adding 'tapyoca/darpa/darpa.py' adding 'tapyoca/demonyms/__init__.py' adding 
'tapyoca/demonyms/data_acquisition.py' adding 'tapyoca/indexing_podcasts/__init__.py' adding 'tapyoca/indexing_podcasts/prep.py' adding 'tapyoca/parquet_deformations/__init__.py' adding 'tapyoca/parquet_deformations/parquet_deformations.py' adding 'tapyoca/parquet_deformations/py_fonts.py' adding 'tapyoca/phoneming/__init__.py' adding 'tapyoca/phoneming/explore.py' adding 'tapyoca-0.0.4.dist-info/LICENSE' adding 'tapyoca-0.0.4.dist-info/METADATA' adding 'tapyoca-0.0.4.dist-info/WHEEL' adding 'tapyoca-0.0.4.dist-info/top_level.txt' adding 'tapyoca-0.0.4.dist-info/RECORD' removing build/bdist.linux-x86_64/wheel Building wheel for tapyoca (pyproject.toml): finished with status 'done' Created wheel for tapyoca: filename=tapyoca-0.0.4-py3-none-any.whl size=75869 sha256=115c0119420f6bd389f68b9247b0bc2ed5b484d22dfd2198af5536d4fc3ddb72 Stored in directory: /builddir/.cache/pip/wheels/22/4a/48/94112b8bf255c216f8a8adeee40369fb497047380288ec3da8 Successfully built tapyoca + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.ypRL9R + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + '[' /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT ++ dirname /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT + mkdir -p /builddir/build/BUILD/python-tapyoca-0.0.4-build + mkdir /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 
-Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + cd tapyoca-0.0.4 ++ ls /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir/tapyoca-0.0.4-py3-none-any.whl ++ xargs basename --multiple ++ sed -E 's/([^-]+)-([^-]+)-.+\.whl/\1==\2/' + specifier=tapyoca==0.0.4 + '[' -z tapyoca==0.0.4 ']' + TMPDIR=/builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/.pyproject-builddir + /usr/bin/python3 -m pip install --root /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT --prefix /usr --no-deps --disable-pip-version-check --progress-bar off --verbose --ignore-installed --no-warn-script-location --no-index --no-cache-dir --find-links /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir tapyoca==0.0.4 Using pip 24.2 from /usr/lib/python3.13/site-packages/pip (python 3.13) Looking in links: /builddir/build/BUILD/python-tapyoca-0.0.4-build/tapyoca-0.0.4/pyproject-wheeldir Processing ./pyproject-wheeldir/tapyoca-0.0.4-py3-none-any.whl Installing collected packages: tapyoca Successfully installed tapyoca-0.0.4 + '[' -d /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/bin ']' + rm -f /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-ghost-distinfo + site_dirs=() + '[' -d /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages ']' + site_dirs+=("/usr/lib/python3.13/site-packages") + '[' /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib64/python3.13/site-packages '!=' /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages ']' + '[' -d /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib64/python3.13/site-packages ']' + for site_dir in ${site_dirs[@]} + for distinfo in /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT$site_dir/*.dist-info + echo '%ghost /usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info' + sed -i s/pip/rpm/ /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/INSTALLER + PYTHONPATH=/usr/lib/rpm/redhat + /usr/bin/python3 -B /usr/lib/rpm/redhat/pyproject_preprocess_record.py --buildroot /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT --record /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/RECORD --output /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-record + rm -fv /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/RECORD removed '/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/RECORD' + rm -fv /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/REQUESTED removed '/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca-0.0.4.dist-info/REQUESTED' ++ wc -l 
/builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-ghost-distinfo ++ cut -f1 '-d ' + lines=1 + '[' 1 -ne 1 ']' + RPM_FILES_ESCAPE=4.19 + /usr/bin/python3 /usr/lib/rpm/redhat/pyproject_save_files.py --output-files /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-files --output-modules /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-modules --buildroot /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT --sitelib /usr/lib/python3.13/site-packages --sitearch /usr/lib64/python3.13/site-packages --python-version 3.13 --pyproject-record /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-record --prefix /usr '*' +auto + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/brp-strip /usr/bin/strip + /usr/lib/rpm/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j4 Bytecompiling .py files below /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13 using python3.13 /usr/lib/python3.13/site-packages/tapyoca/darpa/darpa.py:87: SyntaxWarning: invalid escape sequence '\d' /usr/lib/python3.13/site-packages/tapyoca/darpa/darpa.py:87: SyntaxWarning: invalid escape sequence '\d' /usr/lib/python3.13/site-packages/tapyoca/covid/covid_bar_chart_race.py:127: SyntaxWarning: invalid escape sequence '\d' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:17: SyntaxWarning: invalid escape sequence '\w' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:59: SyntaxWarning: invalid escape sequence '\ ' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:88: SyntaxWarning: invalid escape sequence '\w' /usr/lib/python3.13/site-packages/tapyoca/covid/covid_bar_chart_race.py:127: SyntaxWarning: invalid escape sequence '\d' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:184: SyntaxWarning: invalid escape sequence '\ ' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:17: SyntaxWarning: invalid escape sequence '\w' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:59: SyntaxWarning: invalid escape sequence '\ ' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:88: SyntaxWarning: invalid escape sequence '\w' /usr/lib/python3.13/site-packages/tapyoca/demonyms/data_acquisition.py:184: SyntaxWarning: invalid escape sequence '\ ' + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j4 /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/agglutination/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/agglutination/__pycache__/data_acquisition.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/agglutination/__pycache__/partitions.cpython-313.pyc: rewriting with normalized contents 
/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/covid/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/agglutination/__pycache__/py_names.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/darpa/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/covid/__pycache__/covid_bar_chart_race.cpython-313.pyc: replacing with normalized version /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/darpa/__pycache__/darpa.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/demonyms/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/covid/__pycache__/covid_bar_chart_race.cpython-313.opt-1.pyc: replacing with normalized version /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/indexing_podcasts/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/indexing_podcasts/__pycache__/prep.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/parquet_deformations/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/demonyms/__pycache__/data_acquisition.cpython-313.opt-1.pyc: replacing with normalized version /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/demonyms/__pycache__/data_acquisition.cpython-313.pyc: replacing with normalized version /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/phoneming/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/parquet_deformations/__pycache__/py_fonts.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/phoneming/__pycache__/explore.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/__pycache__/__init__.cpython-313.pyc: rewriting with normalized contents /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/parquet_deformations/__pycache__/parquet_deformations.cpython-313.pyc: replacing with normalized version /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages/tapyoca/parquet_deformations/__pycache__/parquet_deformations.cpython-313.opt-1.pyc: replacing with normalized version Scanned 22 directories and 59 files, processed 21 inodes, 21 modified (6 replaced + 15 rewritten), 0 unsupported format, 0 errors Executing(%check): /bin/sh -e 
/var/tmp/rpm-tmp.y18ufC + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + cd tapyoca-0.0.4 ++ cat /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-modules + '[' -z 'tapyoca tapyoca.agglutination tapyoca.agglutination.data_acquisition tapyoca.agglutination.partitions tapyoca.agglutination.py_names tapyoca.covid tapyoca.covid.covid_bar_chart_race tapyoca.darpa tapyoca.darpa.darpa tapyoca.demonyms tapyoca.demonyms.data_acquisition tapyoca.indexing_podcasts tapyoca.indexing_podcasts.prep tapyoca.parquet_deformations tapyoca.parquet_deformations.parquet_deformations tapyoca.parquet_deformations.py_fonts tapyoca.phoneming tapyoca.phoneming.explore' ']' + '[' '!' 
-f /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-modules ']' + PATH=/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/sbin + PYTHONPATH=/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib64/python3.13/site-packages:/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages + _PYTHONSITE=/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib64/python3.13/site-packages:/builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT/usr/lib/python3.13/site-packages + PYTHONDONTWRITEBYTECODE=1 + /usr/bin/python3 -sP /usr/lib/rpm/redhat/import_all_modules.py -f /builddir/build/BUILD/python-tapyoca-0.0.4-build/python-tapyoca-0.0.4-1.fc41.x86_64-pyproject-modules -t Check import: tapyoca + RPM_EC=0 ++ jobs -p + exit 0 Processing files: python3-tapyoca-0.0.4-1.fc41.noarch Provides: python-tapyoca = 0.0.4-1.fc41 python3-tapyoca = 0.0.4-1.fc41 python3.13-tapyoca = 0.0.4-1.fc41 python3.13dist(tapyoca) = 0.0.4 python3dist(tapyoca) = 0.0.4 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PartialHardlinkSets) <= 4.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires: python(abi) = 3.13 Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/python-tapyoca-0.0.4-build/BUILDROOT Wrote: /builddir/build/SRPMS/python-tapyoca-0.0.4-1.fc41.src.rpm Wrote: /builddir/build/RPMS/python3-tapyoca-0.0.4-1.fc41.noarch.rpm Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.PO2xYy + umask 022 + cd /builddir/build/BUILD/python-tapyoca-0.0.4-build + test -d /builddir/build/BUILD/python-tapyoca-0.0.4-build + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/python-tapyoca-0.0.4-build + rm -rf /builddir/build/BUILD/python-tapyoca-0.0.4-build + RPM_EC=0 ++ jobs -p + exit 0 Finish: rpmbuild python-tapyoca-0.0.4-1.fc41.src.rpm Finish: build phase for python-tapyoca-0.0.4-1.fc41.src.rpm INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-41-x86_64-1740863228.713984/root/var/log/dnf5.log INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names INFO: Done(/var/lib/copr-rpmbuild/results/python-tapyoca-0.0.4-1.fc41.src.rpm) Config(child) 0 minutes 13 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot Finish: run Running RPMResults tool Package info: { "packages": [ { "name": "python-tapyoca", "epoch": null, "version": "0.0.4", "release": "1.fc41", "arch": "src" }, { "name": "python3-tapyoca", "epoch": null, "version": "0.0.4", "release": "1.fc41", "arch": "noarch" } ] } RPMResults finished