Warning: Permanently added '2620:52:3:1:dead:beef:cafe:c117' (ED25519) to the list of known hosts.

You can reproduce this build on your computer by running:

  sudo dnf install copr-rpmbuild
  /usr/bin/copr-rpmbuild --verbose --drop-resultdir \
      --task-url https://copr.fedorainfracloud.org/backend/get-build-task/9691632-fedora-42-x86_64 \
      --chroot fedora-42-x86_64

Version: 1.6
PID: 2328
Logging PID: 2330
Task:
{'allow_user_ssh': False,
 'appstream': False,
 'background': False,
 'build_id': 9691632,
 'buildroot_pkgs': [],
 'chroot': 'fedora-42-x86_64',
 'enable_net': True,
 'fedora_review': True,
 'git_hash': '38cf0b693158314ff48ad2d48d54aecb378572e9',
 'git_repo': 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama',
 'isolation': 'default',
 'memory_reqs': 2048,
 'package_name': 'ollama',
 'package_version': '0.12.5-1',
 'project_dirname': 'ollama-cuda',
 'project_name': 'ollama-cuda',
 'project_owner': 'mwprado',
 'repo_priority': None,
 'repos': [{'baseurl': 'https://download.copr.fedorainfracloud.org/results/mwprado/ollama-cuda/fedora-42-x86_64/',
            'id': 'copr_base',
            'name': 'Copr repository',
            'priority': None},
           {'baseurl': 'https://developer.download.nvidia.com/compute/cuda/repos/fedora41/x86_64/',
            'id': 'https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64',
            'name': 'Additional repo https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64'},
           {'baseurl': 'https://developer.download.nvidia.com/compute/cuda/repos/fedora42/x86_64/',
            'id': 'https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64',
            'name': 'Additional repo https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64'}],
 'sandbox': 'mwprado/ollama-cuda--mwprado',
 'source_json': {},
 'source_type': None,
 'ssh_public_keys': None,
 'storage': 0,
 'submitter': 'mwprado',
 'tags': [],
 'task_id': '9691632-fedora-42-x86_64',
 'timeout': 18000,
 'uses_devel_repo': False,
 'with_opts': [],
 'without_opts': []}

Running: git clone https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama --depth 500 --no-single-branch --recursive
cmd: ['git', 'clone', 'https://copr-dist-git.fedorainfracloud.org/git/mwprado/ollama-cuda/ollama', '/var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama', '--depth', '500', '--no-single-branch', '--recursive']
cwd: .
rc: 0
stdout:
stderr: Cloning into '/var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama'...

Running: git checkout 38cf0b693158314ff48ad2d48d54aecb378572e9 --
cmd: ['git', 'checkout', '38cf0b693158314ff48ad2d48d54aecb378572e9', '--']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama
rc: 0
stdout:
stderr: Note: switching to '38cf0b693158314ff48ad2d48d54aecb378572e9'.

You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command.
Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 38cf0b6 automatic import of ollama

Running: dist-git-client sources
cmd: ['dist-git-client', 'sources']
cwd: /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama
rc: 0
stdout:
stderr: INFO: Reading stdout from command: git rev-parse --abbrev-ref HEAD
INFO: Reading stdout from command: git rev-parse HEAD
INFO: Reading sources specification file: sources
INFO: Downloading main.zip
INFO: Reading stdout from command: curl --help all
INFO: Calling: curl -H Pragma: -o main.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama-cuda/ollama/main.zip/md5/10f1829f38996ba9bc665c93dd9d9624/main.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4595  100  4595    0     0   147k      0 --:--:-- --:--:-- --:--:--  149k
INFO: Reading stdout from command: md5sum main.zip
INFO: Downloading v0.12.5.zip
INFO: Calling: curl -H Pragma: -o v0.12.5.zip --location --connect-timeout 60 --retry 3 --retry-delay 10 --remote-time --show-error --fail --retry-all-errors https://copr-dist-git.fedorainfracloud.org/repo/pkgs/mwprado/ollama-cuda/ollama/v0.12.5.zip/md5/9ade9ff7f51e2daafce16888804ebb00/v0.12.5.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.9M  100 10.9M    0     0  53.7M      0 --:--:-- --:--:-- --:--:-- 53.5M
INFO: Reading stdout from command: md5sum v0.12.5.zip
tail: /var/lib/copr-rpmbuild/main.log: file truncated

Running (timeout=18000): unbuffer mock --spec /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1760542611.653638 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
INFO: mock.py version 6.3 starting (python version = 3.13.7, NVR = mock-6.3-1.fc42), args: /usr/libexec/mock/mock --spec /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama/ollama.spec --sources /var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama --resultdir /var/lib/copr-rpmbuild/results --uniqueext 1760542611.653638 -r /var/lib/copr-rpmbuild/results/configs/child.cfg
Start(bootstrap): init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish(bootstrap): init plugins
Start: init plugins
INFO: tmpfs initialized
INFO: selinux enabled
INFO: chroot_scan: initialized
INFO: compress_logs: initialized
Finish: init plugins
INFO: Signal handler active
Start: run
INFO: Start(/var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama/ollama.spec) Config(fedora-42-x86_64)
Start: clean chroot
Finish: clean chroot
Mock Version: 6.3
INFO: Mock Version: 6.3
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760542611.653638/root.
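(Aside: the `dist-git-client sources` step above downloads each source archive from the lookaside cache and then runs `md5sum` on it; the expected digest is embedded in the URL path itself, e.g. `.../main.zip/md5/10f1829f38996ba9bc665c93dd9d9624/main.zip`. The sketch below illustrates that check in Python; it is an illustration of the pattern, not dist-git-client's actual implementation, and the helper names are my own.)

```python
import hashlib

def expected_md5_from_url(url: str) -> str:
    """Extract the digest from a lookaside URL of the form
    .../<filename>/md5/<digest>/<filename> (hypothetical helper)."""
    parts = url.rstrip("/").split("/")
    assert parts[-3] == "md5", "unexpected lookaside URL layout"
    return parts[-2]

def verify_download(path: str, url: str) -> bool:
    """Compare the file's MD5 against the digest embedded in the URL."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_md5_from_url(url)
```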
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
INFO: Guessed host environment type: unknown
INFO: Using container image: registry.fedoraproject.org/fedora:42
INFO: Pulling image: registry.fedoraproject.org/fedora:42
INFO: Tagging container image as mock-bootstrap-35a55024-92f3-4d29-aa47-5152ee240ede
INFO: Checking that 9d3b3c734d39d6b6e64ba9b992f0ffd21d824a30df6cadc3c631f8a3e78739c8 image matches host's architecture
INFO: Copy content of container 9d3b3c734d39d6b6e64ba9b992f0ffd21d824a30df6cadc3c631f8a3e78739c8 to /var/lib/mock/fedora-42-x86_64-bootstrap-1760542611.653638/root
INFO: mounting 9d3b3c734d39d6b6e64ba9b992f0ffd21d824a30df6cadc3c631f8a3e78739c8 with podman image mount
INFO: image 9d3b3c734d39d6b6e64ba9b992f0ffd21d824a30df6cadc3c631f8a3e78739c8 as /var/lib/containers/storage/overlay/40fc70a6408df127073006d89de6710074d0a91015acf3589940bd98ee2ebb9d/merged
INFO: umounting image 9d3b3c734d39d6b6e64ba9b992f0ffd21d824a30df6cadc3c631f8a3e78739c8 (/var/lib/containers/storage/overlay/40fc70a6408df127073006d89de6710074d0a91015acf3589940bd98ee2ebb9d/merged) with podman image umount
INFO: Removing image mock-bootstrap-35a55024-92f3-4d29-aa47-5152ee240ede
INFO: Package manager dnf5 detected and used (fallback)
INFO: Not updating bootstrap chroot, bootstrap_image_ready=True
Start(bootstrap): creating root cache
Finish(bootstrap): creating root cache
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1760542611.653638/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Package manager dnf5 detected and used (direct choice)
INFO: Buildroot is handled by package management downloaded with a bootstrap image:
  rpm-4.20.1-1.fc42.x86_64
  rpm-sequoia-1.7.0-5.fc42.x86_64
  dnf5-5.2.16.0-1.fc42.x86_64
  dnf5-plugins-5.2.16.0-1.fc42.x86_64
Start: installing minimal buildroot with dnf5
Updating and loading repositories:
 Copr repository                        100% |  16.1 KiB/s |   5.1 KiB | 00m00s
 Additional repo https_developer_downlo 100% | 151.8 KiB/s |  50.1 KiB | 00m00s
 Additional repo https_developer_downlo 100% | 226.1 KiB/s | 109.0 KiB | 00m00s
 updates                                100% |  15.4 MiB/s |  26.6 MiB | 00m02s
 fedora                                 100% |  11.8 MiB/s |  35.4 MiB | 00m03s
Repositories loaded.
 Package                            Arch    Version                      Repository      Size
Installing group/module packages:
 bash                               x86_64  5.2.37-1.fc42                fedora       8.2 MiB
 bzip2                              x86_64  1.0.8-20.fc42                fedora      99.3 KiB
 coreutils                          x86_64  9.6-6.fc42                   updates      5.4 MiB
 cpio                               x86_64  2.15-4.fc42                  fedora       1.1 MiB
 diffutils                          x86_64  3.12-1.fc42                  updates      1.6 MiB
 fedora-release-common              noarch  42-30                        updates     20.2 KiB
 findutils                          x86_64  1:4.10.0-5.fc42              fedora       1.9 MiB
 gawk                               x86_64  5.3.1-1.fc42                 fedora       1.7 MiB
 glibc-minimal-langpack             x86_64  2.41-11.fc42                 updates      0.0 B
 grep                               x86_64  3.11-10.fc42                 fedora       1.0 MiB
 gzip                               x86_64  1.13-3.fc42                  fedora     392.9 KiB
 info                               x86_64  7.2-3.fc42                   fedora     357.9 KiB
 patch                              x86_64  2.8-1.fc42                   updates    222.8 KiB
 redhat-rpm-config                  noarch  342-4.fc42                   updates    185.5 KiB
 rpm-build                          x86_64  4.20.1-1.fc42                fedora     168.7 KiB
 sed                                x86_64  4.9-4.fc42                   fedora     857.3 KiB
 shadow-utils                       x86_64  2:4.17.4-1.fc42              fedora       4.0 MiB
 tar                                x86_64  2:1.35-5.fc42                fedora       3.0 MiB
 unzip                              x86_64  6.0-66.fc42                  fedora     390.3 KiB
 util-linux                         x86_64  2.40.4-7.fc42                fedora       3.4 MiB
 which                              x86_64  2.23-2.fc42                  updates     83.5 KiB
 xz                                 x86_64  1:5.8.1-2.fc42               updates      1.3 MiB
Installing dependencies:
 add-determinism                    x86_64  0.6.0-1.fc42                 fedora       2.5 MiB
 alternatives                       x86_64  1.33-1.fc42                  updates     62.2 KiB
 ansible-srpm-macros                noarch  1-17.1.fc42                  fedora      35.7 KiB
 audit-libs                         x86_64  4.1.1-1.fc42                 updates    378.8 KiB
 basesystem                         noarch  11-22.fc42                   fedora       0.0 B
 binutils                           x86_64  2.44-6.fc42                  updates     25.8 MiB
 build-reproducibility-srpm-macros  noarch  0.6.0-1.fc42                 fedora     735.0 B
 bzip2-libs                         x86_64  1.0.8-20.fc42                fedora      84.6 KiB
 ca-certificates                    noarch  2025.2.80_v9.0.304-1.0.fc42  updates      2.7 MiB
 coreutils-common                   x86_64  9.6-6.fc42                   updates     11.1 MiB
 crypto-policies                    noarch  20250707-1.gitad370a8.fc42   updates    142.9 KiB
 curl                               x86_64  8.11.1-6.fc42                updates    450.6 KiB
 cyrus-sasl-lib                     x86_64  2.1.28-30.fc42               fedora       2.3 MiB
 debugedit                          x86_64  5.1-7.fc42                   updates    192.7 KiB
 dwz                                x86_64  0.16-1.fc42                  updates    287.1 KiB
 ed                                 x86_64  1.21-2.fc42                  fedora     146.5 KiB
 efi-srpm-macros                    noarch  6-3.fc42                     updates     40.1 KiB
 elfutils                           x86_64  0.193-2.fc42                 updates      2.9 MiB
 elfutils-debuginfod-client         x86_64  0.193-2.fc42                 updates     83.9 KiB
 elfutils-default-yama-scope        noarch  0.193-2.fc42                 updates      1.8 KiB
 elfutils-libelf                    x86_64  0.193-2.fc42                 updates      1.2 MiB
 elfutils-libs                      x86_64  0.193-2.fc42                 updates    683.4 KiB
 fedora-gpg-keys                    noarch  42-1                         fedora     128.2 KiB
 fedora-release                     noarch  42-30                        updates      0.0 B
 fedora-release-identity-basic      noarch  42-30                        updates    646.0 B
 fedora-repos                       noarch  42-1                         fedora       4.9 KiB
 file                               x86_64  5.46-3.fc42                  updates    100.2 KiB
 file-libs                          x86_64  5.46-3.fc42                  updates     11.9 MiB
 filesystem                         x86_64  3.18-47.fc42                 updates    112.0 B
 filesystem-srpm-macros             noarch  3.18-47.fc42                 updates     38.2 KiB
 fonts-srpm-macros                  noarch  1:2.0.5-22.fc42              updates     55.8 KiB
 forge-srpm-macros                  noarch  0.4.0-2.fc42                 fedora      38.9 KiB
 fpc-srpm-macros                    noarch  1.3-14.fc42                  fedora     144.0 B
 gdb-minimal                        x86_64  16.3-1.fc42                  updates     13.2 MiB
 gdbm-libs                          x86_64  1:1.23-9.fc42                fedora     129.9 KiB
 ghc-srpm-macros                    noarch  1.9.2-2.fc42                 fedora     779.0 B
 glibc                              x86_64  2.41-11.fc42                 updates      6.6 MiB
 glibc-common                       x86_64  2.41-11.fc42                 updates      1.0 MiB
 glibc-gconv-extra                  x86_64  2.41-11.fc42                 updates      7.2 MiB
 gmp                                x86_64  1:6.3.0-4.fc42               fedora     811.3 KiB
 gnat-srpm-macros                   noarch  6-7.fc42                     fedora       1.0 KiB
 gnulib-l10n                        noarch  20241231-1.fc42              updates    655.0 KiB
 go-srpm-macros                     noarch  3.8.0-1.fc42                 updates     61.9 KiB
 jansson                            x86_64  2.14-2.fc42                  fedora      93.1 KiB
 json-c                             x86_64  0.18-2.fc42                  fedora      86.7 KiB
 kernel-srpm-macros                 noarch  1.0-25.fc42                  fedora       1.9 KiB
 keyutils-libs                      x86_64  1.6.3-5.fc42                 fedora      58.3 KiB
 krb5-libs                          x86_64  1.21.3-6.fc42                updates      2.3 MiB
 libacl                             x86_64  2.3.2-3.fc42                 fedora      38.3 KiB
 libarchive                         x86_64  3.8.1-1.fc42                 updates    955.2 KiB
 libattr                            x86_64  2.5.2-5.fc42                 fedora      27.1 KiB
 libblkid                           x86_64  2.40.4-7.fc42                fedora     262.4 KiB
 libbrotli                          x86_64  1.1.0-6.fc42                 fedora     841.3 KiB
 libcap                             x86_64  2.73-2.fc42                  fedora     207.1 KiB
 libcap-ng                          x86_64  0.8.5-4.fc42                 fedora      72.9 KiB
 libcom_err                         x86_64  1.47.2-3.fc42                fedora      67.1 KiB
 libcurl                            x86_64  8.11.1-6.fc42                updates    834.1 KiB
 libeconf                           x86_64  0.7.6-2.fc42                 updates     64.6 KiB
 libevent                           x86_64  2.1.12-15.fc42               fedora     903.1 KiB
 libfdisk                           x86_64  2.40.4-7.fc42                fedora     372.3 KiB
 libffi                             x86_64  3.4.6-5.fc42                 fedora      82.3 KiB
 libgcc                             x86_64  15.2.1-1.fc42                updates    266.6 KiB
 libgomp                            x86_64  15.2.1-1.fc42                updates    541.1 KiB
 libidn2                            x86_64  2.3.8-1.fc42                 fedora     556.5 KiB
 libmount                           x86_64  2.40.4-7.fc42                fedora     356.3 KiB
 libnghttp2                         x86_64  1.64.0-3.fc42                fedora     170.4 KiB
 libpkgconf                         x86_64  2.3.0-2.fc42                 fedora      78.1 KiB
 libpsl                             x86_64  0.21.5-5.fc42                fedora      76.4 KiB
 libselinux                         x86_64  3.8-3.fc42                   updates    193.1 KiB
 libsemanage                        x86_64  3.8.1-2.fc42                 updates    304.4 KiB
 libsepol                           x86_64  3.8-1.fc42                   fedora     826.0 KiB
 libsmartcols                       x86_64  2.40.4-7.fc42                fedora     180.4 KiB
 libssh                             x86_64  0.11.3-1.fc42                updates    567.1 KiB
 libssh-config                      noarch  0.11.3-1.fc42                updates    277.0 B
 libstdc++                          x86_64  15.2.1-1.fc42                updates      2.8 MiB
 libtasn1                           x86_64  4.20.0-1.fc42                fedora     176.3 KiB
 libtool-ltdl                       x86_64  2.5.4-4.fc42                 fedora      70.1 KiB
 libunistring                       x86_64  1.1-9.fc42                   fedora       1.7 MiB
 libuuid                            x86_64  2.40.4-7.fc42                fedora      37.3 KiB
 libverto                           x86_64  0.3.2-10.fc42                fedora      25.4 KiB
 libxcrypt                          x86_64  4.4.38-7.fc42                updates    284.5 KiB
 libxml2                            x86_64  2.12.10-1.fc42               fedora       1.7 MiB
 libzstd                            x86_64  1.5.7-1.fc42                 fedora     807.8 KiB
 lua-libs                           x86_64  5.4.8-1.fc42                 updates    280.8 KiB
 lua-srpm-macros                    noarch  1-15.fc42                    fedora       1.3 KiB
 lz4-libs                           x86_64  1.10.0-2.fc42                fedora     157.4 KiB
 mpfr                               x86_64  4.2.2-1.fc42                 fedora     828.8 KiB
 ncurses-base                       noarch  6.5-5.20250125.fc42          fedora     326.8 KiB
 ncurses-libs                       x86_64  6.5-5.20250125.fc42          fedora     946.3 KiB
 ocaml-srpm-macros                  noarch  10-4.fc42                    fedora       1.9 KiB
 openblas-srpm-macros               noarch  2-19.fc42                    fedora     112.0 B
 openldap                           x86_64  2.6.10-1.fc42                updates    655.8 KiB
 openssl-libs                       x86_64  1:3.2.6-2.fc42               updates      7.8 MiB
 p11-kit                            x86_64  0.25.8-1.fc42                updates      2.3 MiB
 p11-kit-trust                      x86_64  0.25.8-1.fc42                updates    446.5 KiB
 package-notes-srpm-macros          noarch  0.5-13.fc42                  fedora       1.6 KiB
 pam-libs                           x86_64  1.7.0-6.fc42                 updates    126.7 KiB
 pcre2                              x86_64  10.45-1.fc42                 fedora     697.7 KiB
 pcre2-syntax                       noarch  10.45-1.fc42                 fedora     273.9 KiB
 perl-srpm-macros                   noarch  1-57.fc42                    fedora     861.0 B
 pkgconf                            x86_64  2.3.0-2.fc42                 fedora      88.5 KiB
 pkgconf-m4                         noarch  2.3.0-2.fc42                 fedora      14.4 KiB
 pkgconf-pkg-config                 x86_64  2.3.0-2.fc42                 fedora     989.0 B
 popt                               x86_64  1.19-8.fc42                  fedora     132.8 KiB
 publicsuffix-list-dafsa            noarch  20250616-1.fc42              updates     69.1 KiB
 pyproject-srpm-macros              noarch  1.18.4-1.fc42                updates      1.9 KiB
 python-srpm-macros                 noarch  3.13-5.fc42                  updates     51.0 KiB
 qt5-srpm-macros                    noarch  5.15.17-1.fc42               updates    500.0 B
 qt6-srpm-macros                    noarch  6.9.2-1.fc42                 updates    464.0 B
 readline                           x86_64  8.2-13.fc42                  fedora     485.0 KiB
 rpm                                x86_64  4.20.1-1.fc42                fedora       3.1 MiB
 rpm-build-libs                     x86_64  4.20.1-1.fc42                fedora     206.6 KiB
 rpm-libs                           x86_64  4.20.1-1.fc42                fedora     721.8 KiB
 rpm-sequoia                        x86_64  1.7.0-5.fc42                 fedora       2.4 MiB
 rust-srpm-macros                   noarch  26.4-1.fc42                  updates      4.8 KiB
 setup                              noarch  2.15.0-13.fc42               fedora     720.9 KiB
 sqlite-libs                        x86_64  3.47.2-5.fc42                updates      1.5 MiB
 systemd-libs                       x86_64  257.9-2.fc42                 updates      2.2 MiB
 systemd-standalone-sysusers        x86_64  257.9-2.fc42                 updates    277.3 KiB
 tree-sitter-srpm-macros            noarch  0.1.0-8.fc42                 fedora       6.5 KiB
 util-linux-core                    x86_64  2.40.4-7.fc42                fedora       1.4 MiB
 xxhash-libs                        x86_64  0.8.3-2.fc42                 fedora      90.2 KiB
 xz-libs                            x86_64  1:5.8.1-2.fc42               updates    217.8 KiB
 zig-srpm-macros                    noarch  1-4.fc42                     fedora       1.1 KiB
 zip                                x86_64  3.0-43.fc42                  fedora     698.5 KiB
 zlib-ng-compat                     x86_64  2.2.5-2.fc42                 updates    137.6 KiB
 zstd                               x86_64  1.5.7-1.fc42                 fedora       1.7 MiB
Installing groups:
 Buildsystem building group
Transaction Summary:
 Installing:       149 packages
Total size of inbound packages is 52 MiB. Need to download 52 MiB.
After this operation, 178 MiB extra will be used (install 178 MiB, remove 0 B).
[  1/149] bzip2-0:1.0.8-20.fc42.x86_64 100% | 148.8 KiB/s |  52.1 KiB | 00m00s
[  2/149] cpio-0:2.15-4.fc42.x86_64 100% | 524.2 KiB/s | 294.6 KiB | 00m01s
[  3/149] grep-0:3.11-10.fc42.x86_64 100% |   1.8 MiB/s | 300.1 KiB | 00m00s
[  4/149] findutils-1:4.10.0-5.fc42.x86 100% |   1.4 MiB/s | 551.5 KiB | 00m00s
[  5/149] info-0:7.2-3.fc42.x86_64 100% |   2.1 MiB/s | 183.8 KiB | 00m00s
[  6/149] gzip-0:1.13-3.fc42.x86_64 100% |   1.8 MiB/s | 170.4 KiB | 00m00s
[  7/149] bash-0:5.2.37-1.fc42.x86_64 100% |   2.1 MiB/s |   1.8 MiB | 00m01s
[  8/149] rpm-build-0:4.20.1-1.fc42.x86 100% |   1.1 MiB/s |  81.8 KiB | 00m00s
[  9/149] sed-0:4.9-4.fc42.x86_64 100% |   2.8 MiB/s | 317.3 KiB | 00m00s
[ 10/149] unzip-0:6.0-66.fc42.x86_64 100% |   2.0 MiB/s | 184.6 KiB | 00m00s
[ 11/149] shadow-utils-2:4.17.4-1.fc42. 100% |   8.2 MiB/s |   1.3 MiB | 00m00s
[ 12/149] tar-2:1.35-5.fc42.x86_64 100% |   4.0 MiB/s | 862.5 KiB | 00m00s
[ 13/149] fedora-release-common-0:42-30 100% | 429.6 KiB/s |  24.5 KiB | 00m00s
[ 14/149] diffutils-0:3.12-1.fc42.x86_6 100% |   2.3 MiB/s | 392.6 KiB | 00m00s
[ 15/149] glibc-minimal-langpack-0:2.41 100% |   3.9 MiB/s |  98.7 KiB | 00m00s
[ 16/149] coreutils-0:9.6-6.fc42.x86_64 100% |   4.9 MiB/s |   1.1 MiB | 00m00s
[ 17/149] patch-0:2.8-1.fc42.x86_64 100% |   4.8 MiB/s | 113.5 KiB | 00m00s
[ 18/149] gawk-0:5.3.1-1.fc42.x86_64 100% |  10.7 MiB/s |   1.1 MiB | 00m00s
[ 19/149] redhat-rpm-config-0:342-4.fc4 100% |   4.0 MiB/s |  81.1 KiB | 00m00s
[ 20/149] which-0:2.23-2.fc42.x86_64 100% |   2.1 MiB/s |  41.7 KiB | 00m00s
[ 21/149] xz-1:5.8.1-2.fc42.x86_64 100% |  18.6 MiB/s | 572.6 KiB | 00m00s
[ 22/149] ncurses-libs-0:6.5-5.20250125 100% |   4.2 MiB/s | 335.0 KiB | 00m00s
[ 23/149] bzip2-libs-0:1.0.8-20.fc42.x8 100% | 605.2 KiB/s |  43.6 KiB | 00m00s
[ 24/149] util-linux-0:2.40.4-7.fc42.x8 100% |   7.1 MiB/s |   1.2 MiB | 00m00s
[ 25/149] pcre2-0:10.45-1.fc42.x86_64 100% |   3.4 MiB/s | 262.8 KiB | 00m00s
[ 26/149] popt-0:1.19-8.fc42.x86_64 100% | 879.2 KiB/s |  65.9 KiB | 00m00s
[ 27/149] readline-0:8.2-13.fc42.x86_64 100% |   2.8 MiB/s | 215.2 KiB | 00m00s
[ 28/149] rpm-0:4.20.1-1.fc42.x86_64 100% |   6.4 MiB/s | 548.4 KiB | 00m00s
[ 29/149] rpm-build-libs-0:4.20.1-1.fc4 100% |   1.2 MiB/s |  99.7 KiB | 00m00s
[ 30/149] rpm-libs-0:4.20.1-1.fc42.x86_ 100% |   3.9 MiB/s | 312.0 KiB | 00m00s
[ 31/149] libacl-0:2.3.2-3.fc42.x86_64 100% | 328.7 KiB/s |  23.0 KiB | 00m00s
[ 32/149] zstd-0:1.5.7-1.fc42.x86_64 100% |   5.7 MiB/s | 485.9 KiB | 00m00s
[ 33/149] setup-0:2.15.0-13.fc42.noarch 100% |   2.1 MiB/s | 155.8 KiB | 00m00s
[ 34/149] coreutils-common-0:9.6-6.fc42 100% |  30.2 MiB/s |   2.1 MiB | 00m00s
[ 35/149] gmp-1:6.3.0-4.fc42.x86_64 100% |   4.0 MiB/s | 317.7 KiB | 00m00s
[ 36/149] libattr-0:2.5.2-5.fc42.x86_64 100% | 251.2 KiB/s |  17.1 KiB | 00m00s
[ 37/149] libcap-0:2.73-2.fc42.x86_64 100% |   1.1 MiB/s |  84.3 KiB | 00m00s
[ 38/149] fedora-repos-0:42-1.noarch 100% | 135.7 KiB/s |   9.2 KiB | 00m00s
[ 39/149] glibc-common-0:2.41-11.fc42.x 100% |  14.5 MiB/s | 385.6 KiB | 00m00s
[ 40/149] mpfr-0:4.2.2-1.fc42.x86_64 100% |   4.3 MiB/s | 345.3 KiB | 00m00s
[ 41/149] ed-0:1.21-2.fc42.x86_64 100% |   1.1 MiB/s |  82.0 KiB | 00m00s
[ 42/149] ansible-srpm-macros-0:1-17.1. 100% | 290.2 KiB/s |  20.3 KiB | 00m00s
[ 43/149] build-reproducibility-srpm-ma 100% | 171.8 KiB/s |  11.7 KiB | 00m00s
[ 44/149] forge-srpm-macros-0:0.4.0-2.f 100% | 291.9 KiB/s |  19.9 KiB | 00m00s
[ 45/149] fpc-srpm-macros-0:1.3-14.fc42 100% | 117.9 KiB/s |   8.0 KiB | 00m00s
[ 46/149] ghc-srpm-macros-0:1.9.2-2.fc4 100% | 134.7 KiB/s |   9.2 KiB | 00m00s
[ 47/149] gnat-srpm-macros-0:6-7.fc42.n 100% | 126.6 KiB/s |   8.6 KiB | 00m00s
[ 48/149] kernel-srpm-macros-0:1.0-25.f 100% | 145.2 KiB/s |   9.9 KiB | 00m00s
[ 49/149] lua-srpm-macros-0:1-15.fc42.n 100% | 133.1 KiB/s |   8.9 KiB | 00m00s
[ 50/149] ocaml-srpm-macros-0:10-4.fc42 100% | 135.4 KiB/s |   9.2 KiB | 00m00s
[ 51/149] openblas-srpm-macros-0:2-19.f 100% | 114.2 KiB/s |   7.8 KiB | 00m00s
[ 52/149] package-notes-srpm-macros-0:0 100% | 136.2 KiB/s |   9.3 KiB | 00m00s
[ 53/149] perl-srpm-macros-0:1-57.fc42. 100% | 125.1 KiB/s |   8.5 KiB | 00m00s
[ 54/149] tree-sitter-srpm-macros-0:0.1 100% | 165.2 KiB/s |  11.2 KiB | 00m00s
[ 55/149] zig-srpm-macros-0:1-4.fc42.no 100% | 121.2 KiB/s |   8.2 KiB | 00m00s
[ 56/149] zip-0:3.0-43.fc42.x86_64 100% |   3.4 MiB/s | 263.5 KiB | 00m00s
[ 57/149] libblkid-0:2.40.4-7.fc42.x86_ 100% |   1.5 MiB/s | 122.5 KiB | 00m00s
[ 58/149] libcap-ng-0:0.8.5-4.fc42.x86_ 100% | 473.1 KiB/s |  32.2 KiB | 00m00s
[ 59/149] libfdisk-0:2.40.4-7.fc42.x86_ 100% |   2.1 MiB/s | 158.5 KiB | 00m00s
[ 60/149] libsmartcols-0:2.40.4-7.fc42. 100% |   1.1 MiB/s |  81.2 KiB | 00m00s
[ 61/149] libmount-0:2.40.4-7.fc42.x86_ 100% |   1.8 MiB/s | 155.1 KiB | 00m00s
[ 62/149] xz-libs-1:5.8.1-2.fc42.x86_64 100% |   5.0 MiB/s | 113.0 KiB | 00m00s
[ 63/149] libuuid-0:2.40.4-7.fc42.x86_6 100% | 372.6 KiB/s |  25.3 KiB | 00m00s
[ 64/149] util-linux-core-0:2.40.4-7.fc 100% |   6.1 MiB/s | 529.2 KiB | 00m00s
[ 65/149] pcre2-syntax-0:10.45-1.fc42.n 100% |   1.7 MiB/s | 161.7 KiB | 00m00s
[ 66/149] ncurses-base-0:6.5-5.20250125 100% | 838.6 KiB/s |  88.1 KiB | 00m00s
[ 67/149] libzstd-0:1.5.7-1.fc42.x86_64 100% |   4.0 MiB/s | 314.8 KiB | 00m00s
[ 68/149] gnulib-l10n-0:20241231-1.fc42 100% |   7.3 MiB/s | 150.1 KiB | 00m00s
[ 69/149] lz4-libs-0:1.10.0-2.fc42.x86_ 100% |   1.0 MiB/s |  78.1 KiB | 00m00s
[ 70/149] rpm-sequoia-0:1.7.0-5.fc42.x8 100% |   9.4 MiB/s | 911.1 KiB | 00m00s
[ 71/149] fedora-gpg-keys-0:42-1.noarch 100% |   1.8 MiB/s | 135.6 KiB | 00m00s
[ 72/149] glibc-0:2.41-11.fc42.x86_64 100% |  41.6 MiB/s |   2.2 MiB | 00m00s
[ 73/149] basesystem-0:11-22.fc42.noarc 100% | 107.2 KiB/s |   7.3 KiB | 00m00s
[ 74/149] glibc-gconv-extra-0:2.41-11.f 100% |  15.2 MiB/s |   1.6 MiB | 00m00s
[ 75/149] libgcc-0:15.2.1-1.fc42.x86_64 100% |   6.4 MiB/s | 131.6 KiB | 00m00s
[ 76/149] add-determinism-0:0.6.0-1.fc4 100% |   5.0 MiB/s | 918.3 KiB | 00m00s
[ 77/149] zlib-ng-compat-0:2.2.5-2.fc42 100% |   3.5 MiB/s |  79.2 KiB | 00m00s
[ 78/149] libstdc++-0:15.2.1-1.fc42.x86 100% |  24.2 MiB/s | 917.8 KiB | 00m00s
[ 79/149] filesystem-0:3.18-47.fc42.x86 100% |  34.2 MiB/s |   1.3 MiB | 00m00s
[ 80/149] libxcrypt-0:4.4.38-7.fc42.x86 100% |   5.9 MiB/s | 127.2 KiB | 00m00s
[ 81/149] libselinux-0:3.8-3.fc42.x86_6 100% |   1.5 MiB/s |  96.7 KiB | 00m00s
[ 82/149] libsepol-0:3.8-1.fc42.x86_64 100% |   4.4 MiB/s | 348.9 KiB | 00m00s
[ 83/149] audit-libs-0:4.1.1-1.fc42.x86 100% |   5.6 MiB/s | 138.5 KiB | 00m00s
[ 84/149] systemd-libs-0:257.9-2.fc42.x 100% |  22.6 MiB/s | 810.3 KiB | 00m00s
[ 85/149] libeconf-0:0.7.6-2.fc42.x86_6 100% |   1.9 MiB/s |  35.2 KiB | 00m00s
[ 86/149] pam-libs-0:1.7.0-6.fc42.x86_6 100% |   2.2 MiB/s |  57.5 KiB | 00m00s
[ 87/149] libsemanage-0:3.8.1-2.fc42.x8 100% |   6.3 MiB/s | 123.2 KiB | 00m00s
[ 88/149] sqlite-libs-0:3.47.2-5.fc42.x 100% |  17.5 MiB/s | 753.8 KiB | 00m00s
[ 89/149] lua-libs-0:5.4.8-1.fc42.x86_6 100% |   2.6 MiB/s | 131.9 KiB | 00m00s
[ 90/149] openssl-libs-1:3.2.6-2.fc42.x 100% |  36.5 MiB/s |   2.3 MiB | 00m00s
[ 91/149] elfutils-libelf-0:0.193-2.fc4 100% |  10.1 MiB/s | 207.8 KiB | 00m00s
[ 92/149] elfutils-0:0.193-2.fc42.x86_6 100% |  20.7 MiB/s | 571.4 KiB | 00m00s
[ 93/149] elfutils-debuginfod-client-0: 100% |   2.4 MiB/s |  46.9 KiB | 00m00s
[ 94/149] elfutils-libs-0:0.193-2.fc42. 100% |   4.4 MiB/s | 270.2 KiB | 00m00s
[ 95/149] file-libs-0:5.46-3.fc42.x86_6 100% |  26.8 MiB/s | 849.5 KiB | 00m00s
[ 96/149] file-0:5.46-3.fc42.x86_64 100% |   2.5 MiB/s |  48.6 KiB | 00m00s
[ 97/149] libgomp-0:15.2.1-1.fc42.x86_6 100% |  15.1 MiB/s | 371.6 KiB | 00m00s
[ 98/149] json-c-0:0.18-2.fc42.x86_64 100% | 660.4 KiB/s |  44.9 KiB | 00m00s
[ 99/149] debugedit-0:5.1-7.fc42.x86_64 100% |   2.8 MiB/s |  78.8 KiB | 00m00s
[100/149] jansson-0:2.14-2.fc42.x86_64 100% | 662.6 KiB/s |  45.7 KiB | 00m00s
[101/149] libarchive-0:3.8.1-1.fc42.x86 100% |  10.6 MiB/s | 421.6 KiB | 00m00s
[102/149] binutils-0:2.44-6.fc42.x86_64 100% |  41.8 MiB/s |   5.8 MiB | 00m00s
[103/149] pkgconf-pkg-config-0:2.3.0-2. 100% | 146.0 KiB/s |   9.9 KiB | 00m00s
[104/149] libxml2-0:2.12.10-1.fc42.x86_ 100% |   7.6 MiB/s | 683.7 KiB | 00m00s
[105/149] pkgconf-0:2.3.0-2.fc42.x86_64 100% | 650.5 KiB/s |  44.9 KiB | 00m00s
[106/149] pkgconf-m4-0:2.3.0-2.fc42.noa 100% | 209.3 KiB/s |  14.2 KiB | 00m00s
[107/149] curl-0:8.11.1-6.fc42.x86_64 100% |  10.2 MiB/s | 220.0 KiB | 00m00s
[108/149] libpkgconf-0:2.3.0-2.fc42.x86 100% | 564.2 KiB/s |  38.4 KiB | 00m00s
[109/149] dwz-0:0.16-1.fc42.x86_64 100% |   7.0 MiB/s | 135.5 KiB | 00m00s
[110/149] efi-srpm-macros-0:6-3.fc42.no 100% |   1.2 MiB/s |  22.5 KiB | 00m00s
[111/149] filesystem-srpm-macros-0:3.18 100% |   1.3 MiB/s |  26.1 KiB | 00m00s
[112/149] fonts-srpm-macros-1:2.0.5-22. 100% |   1.4 MiB/s |  27.2 KiB | 00m00s
[113/149] go-srpm-macros-0:3.8.0-1.fc42 100% |   1.5 MiB/s |  28.3 KiB | 00m00s
[114/149] pyproject-srpm-macros-0:1.18. 100% | 762.7 KiB/s |  13.7 KiB | 00m00s
[115/149] python-srpm-macros-0:3.13-5.f 100% |   1.2 MiB/s |  22.5 KiB | 00m00s
[116/149] qt5-srpm-macros-0:5.15.17-1.f 100% | 484.2 KiB/s |   8.7 KiB | 00m00s
[117/149] qt6-srpm-macros-0:6.9.2-1.fc4 100% | 493.8 KiB/s |   9.4 KiB | 00m00s
[118/149] rust-srpm-macros-0:26.4-1.fc4 100% | 621.7 KiB/s |  11.2 KiB | 00m00s
[119/149] ca-certificates-0:2025.2.80_v 100% |  30.7 MiB/s | 973.5 KiB | 00m00s
[120/149] crypto-policies-0:20250707-1. 100% |   4.3 MiB/s |  96.0 KiB | 00m00s
[121/149] elfutils-default-yama-scope-0 100% | 699.0 KiB/s |  12.6 KiB | 00m00s
[122/149] p11-kit-0:0.25.8-1.fc42.x86_6 100% |  19.7 MiB/s | 503.5 KiB | 00m00s
[123/149] p11-kit-trust-0:0.25.8-1.fc42 100% |   6.8 MiB/s | 139.2 KiB | 00m00s
[124/149] alternatives-0:1.33-1.fc42.x8 100% |   2.1 MiB/s |  40.5 KiB | 00m00s
[125/149] libffi-0:3.4.6-5.fc42.x86_64 100% | 587.3 KiB/s |  39.9 KiB | 00m00s
[126/149] libtasn1-0:4.20.0-1.fc42.x86_ 100% |   1.0 MiB/s |  75.0 KiB | 00m00s
[127/149] fedora-release-0:42-30.noarch 100% | 750.9 KiB/s |  13.5 KiB | 00m00s
[128/149] systemd-standalone-sysusers-0 100% |   7.6 MiB/s | 154.8 KiB | 00m00s
[129/149] fedora-release-identity-basic 100% | 794.1 KiB/s |  14.3 KiB | 00m00s
[130/149] libcurl-0:8.11.1-6.fc42.x86_6 100% |  14.5 MiB/s | 371.7 KiB | 00m00s
[131/149] xxhash-libs-0:0.8.3-2.fc42.x8 100% | 574.5 KiB/s |  39.1 KiB | 00m00s
[132/149] libssh-0:0.11.3-1.fc42.x86_64 100% |  10.8 MiB/s | 233.0 KiB | 00m00s
[133/149] gdb-minimal-0:16.3-1.fc42.x86 100% |  31.9 MiB/s |   4.4 MiB | 00m00s
[134/149] libbrotli-0:1.1.0-6.fc42.x86_ 100% |   4.3 MiB/s | 339.8 KiB | 00m00s
[135/149] libidn2-0:2.3.8-1.fc42.x86_64 100% |   2.3 MiB/s | 174.8 KiB | 00m00s
[136/149] libssh-config-0:0.11.3-1.fc42 100% | 396.1 KiB/s |   9.1 KiB | 00m00s
[137/149] libnghttp2-0:1.64.0-3.fc42.x8 100% |   1.1 MiB/s |  77.7 KiB | 00m00s
[138/149] libpsl-0:0.21.5-5.fc42.x86_64 100% | 928.1 KiB/s |  64.0 KiB | 00m00s
[139/149] publicsuffix-list-dafsa-0:202 100% |   2.9 MiB/s |  59.2 KiB | 00m00s
[140/149] krb5-libs-0:1.21.3-6.fc42.x86 100% |  24.7 MiB/s | 759.8 KiB | 00m00s
[141/149] libunistring-0:1.1-9.fc42.x86 100% |   6.2 MiB/s | 542.5 KiB | 00m00s
[142/149] keyutils-libs-0:1.6.3-5.fc42. 100% | 463.7 KiB/s |  31.5 KiB | 00m00s
[143/149] openldap-0:2.6.10-1.fc42.x86_ 100% |  12.0 MiB/s | 258.6 KiB | 00m00s
[144/149] libcom_err-0:1.47.2-3.fc42.x8 100% | 396.0 KiB/s |  26.9 KiB | 00m00s
[145/149] libverto-0:0.3.2-10.fc42.x86_ 100% | 305.9 KiB/s |  20.8 KiB | 00m00s
[146/149] libevent-0:2.1.12-15.fc42.x86 100% |   3.3 MiB/s | 260.2 KiB | 00m00s
[147/149] libtool-ltdl-0:2.5.4-4.fc42.x 100% | 531.9 KiB/s |  36.2 KiB | 00m00s
[148/149] cyrus-sasl-lib-0:2.1.28-30.fc 100% |   8.5 MiB/s | 793.5 KiB | 00m00s
[149/149] gdbm-libs-1:1.23-9.fc42.x86_6 100% | 826.5 KiB/s |  57.0 KiB | 00m00s
--------------------------------------------------------------------------------
[149/149] Total 100% |  13.8 MiB/s |  52.4 MiB | 00m04s
Running transaction
Importing OpenPGP key 0x105EF944:
  UserID     : "Fedora (42) "
  Fingerprint: B0F4950458F69E1150C6C5EDC8AC4916105EF944
  From       : file:///usr/share/distribution-gpg-keys/fedora/RPM-GPG-KEY-fedora-42-primary
The key was successfully imported.
[  1/151] Verify package files 100% | 706.0 B/s | 149.0 B | 00m00s
[  2/151] Prepare transaction 100% |   1.9 KiB/s | 149.0 B | 00m00s
[  3/151] Installing libgcc-0:15.2.1-1. 100% | 130.9 MiB/s | 268.2 KiB | 00m00s
[  4/151] Installing publicsuffix-list- 100% |  68.2 MiB/s |  69.8 KiB | 00m00s
[  5/151] Installing libssh-config-0:0. 100% |   0.0 B/s | 816.0 B | 00m00s
[  6/151] Installing fedora-release-ide 100% | 882.8 KiB/s | 904.0 B | 00m00s
[  7/151] Installing fedora-gpg-keys-0: 100% |  19.0 MiB/s | 174.8 KiB | 00m00s
[  8/151] Installing fedora-repos-0:42- 100% |   5.6 MiB/s |   5.7 KiB | 00m00s
[  9/151] Installing fedora-release-com 100% |  12.0 MiB/s |  24.5 KiB | 00m00s
[ 10/151] Installing fedora-release-0:4 100% |   3.2 KiB/s | 124.0 B | 00m00s
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating group 'adm' with GID 4.
>>> Creating group 'audio' with GID 63.
>>> Creating group 'bin' with GID 1.
>>> Creating group 'cdrom' with GID 11.
>>> Creating group 'clock' with GID 103.
>>> Creating group 'daemon' with GID 2.
>>> Creating group 'dialout' with GID 18.
>>> Creating group 'disk' with GID 6.
>>> Creating group 'floppy' with GID 19.
>>> Creating group 'ftp' with GID 50.
>>> Creating group 'games' with GID 20.
>>> Creating group 'input' with GID 104.
>>> Creating group 'kmem' with GID 9.
>>> Creating group 'kvm' with GID 36.
>>> Creating group 'lock' with GID 54.
>>> Creating group 'lp' with GID 7.
>>> Creating group 'mail' with GID 12.
>>> Creating group 'man' with GID 15.
>>> Creating group 'mem' with GID 8.
>>> Creating group 'nobody' with GID 65534.
>>> Creating group 'render' with GID 105.
>>> Creating group 'root' with GID 0.
>>> Creating group 'sgx' with GID 106.
>>> Creating group 'sys' with GID 3.
>>> Creating group 'tape' with GID 33.
>>> Creating group 'tty' with GID 5.
>>> Creating group 'users' with GID 100.
>>> Creating group 'utmp' with GID 22.
>>> Creating group 'video' with GID 39.
>>> Creating group 'wheel' with GID 10.
>>>
>>> Running sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Finished sysusers scriptlet: setup-0:2.15.0-13.fc42.noarch
>>> Scriptlet output:
>>> Creating user 'adm' (adm) with UID 3 and GID 4.
>>> Creating user 'bin' (bin) with UID 1 and GID 1.
>>> Creating user 'daemon' (daemon) with UID 2 and GID 2.
>>> Creating user 'ftp' (FTP User) with UID 14 and GID 50.
>>> Creating user 'games' (games) with UID 12 and GID 20.
>>> Creating user 'halt' (halt) with UID 7 and GID 0.
>>> Creating user 'lp' (lp) with UID 4 and GID 7.
>>> Creating user 'mail' (mail) with UID 8 and GID 12.
>>> Creating user 'nobody' (Kernel Overflow User) with UID 65534 and GID 65534.
>>> Creating user 'operator' (operator) with UID 11 and GID 0.
>>> Creating user 'root' (Super User) with UID 0 and GID 0.
>>> Creating user 'shutdown' (shutdown) with UID 6 and GID 0.
>>> Creating user 'sync' (sync) with UID 5 and GID 0.
>>>
[ 11/151] Installing setup-0:2.15.0-13. 100% | 39.4 MiB/s | 726.7 KiB | 00m00s
[ 12/151] Installing filesystem-0:3.18- 100% | 1.4 MiB/s | 212.8 KiB | 00m00s
[ 13/151] Installing basesystem-0:11-22 100% | 0.0 B/s | 124.0 B | 00m00s
[ 14/151] Installing rust-srpm-macros-0 100% | 5.4 MiB/s | 5.6 KiB | 00m00s
[ 15/151] Installing qt6-srpm-macros-0: 100% | 0.0 B/s | 740.0 B | 00m00s
[ 16/151] Installing qt5-srpm-macros-0: 100% | 0.0 B/s | 776.0 B | 00m00s
[ 17/151] Installing pkgconf-m4-0:2.3.0 100% | 14.5 MiB/s | 14.8 KiB | 00m00s
[ 18/151] Installing gnulib-l10n-0:2024 100% | 92.3 MiB/s | 661.9 KiB | 00m00s
[ 19/151] Installing coreutils-common-0 100% | 232.4 MiB/s | 11.2 MiB | 00m00s
[ 20/151] Installing pcre2-syntax-0:10. 100% | 135.0 MiB/s | 276.4 KiB | 00m00s
[ 21/151] Installing ncurses-base-0:6.5 100% | 38.2 MiB/s | 352.2 KiB | 00m00s
[ 22/151] Installing glibc-minimal-lang 100% | 0.0 B/s | 124.0 B | 00m00s
[ 23/151] Installing ncurses-libs-0:6.5 100% | 132.9 MiB/s | 952.8 KiB | 00m00s
[ 24/151] Installing glibc-0:2.41-11.fc 100% | 151.2 MiB/s | 6.7 MiB | 00m00s
[ 25/151] Installing bash-0:5.2.37-1.fc 100% | 190.0 MiB/s | 8.2 MiB | 00m00s
[ 26/151] Installing glibc-common-0:2.4 100% | 48.6 MiB/s | 1.0 MiB | 00m00s
[ 27/151] Installing glibc-gconv-extra- 100% | 137.9 MiB/s | 7.3 MiB | 00m00s
[ 28/151] Installing zlib-ng-compat-0:2 100% | 135.2 MiB/s | 138.4 KiB | 00m00s
[ 29/151] Installing bzip2-libs-0:1.0.8 100% | 83.7 MiB/s | 85.7 KiB | 00m00s
[ 30/151] Installing xz-libs-1:5.8.1-2. 100% | 213.8 MiB/s | 218.9 KiB | 00m00s
[ 31/151] Installing libuuid-0:2.40.4-7 100% | 37.5 MiB/s | 38.4 KiB | 00m00s
[ 32/151] Installing libblkid-0:2.40.4- 100% | 128.7 MiB/s | 263.5 KiB | 00m00s
[ 33/151] Installing popt-0:1.19-8.fc42 100% | 27.2 MiB/s | 139.4 KiB | 00m00s
[ 34/151] Installing readline-0:8.2-13. 100% | 237.9 MiB/s | 487.1 KiB | 00m00s
[ 35/151] Installing gmp-1:6.3.0-4.fc42 100% | 198.6 MiB/s | 813.5 KiB | 00m00s
[ 36/151] Installing libzstd-0:1.5.7-1. 100% | 197.5 MiB/s | 809.1 KiB | 00m00s
[ 37/151] Installing elfutils-libelf-0: 100% | 291.6 MiB/s | 1.2 MiB | 00m00s
[ 38/151] Installing libstdc++-0:15.2.1 100% | 257.8 MiB/s | 2.8 MiB | 00m00s
[ 39/151] Installing libxcrypt-0:4.4.38 100% | 140.2 MiB/s | 287.2 KiB | 00m00s
[ 40/151] Installing libattr-0:2.5.2-5. 100% | 27.4 MiB/s | 28.1 KiB | 00m00s
[ 41/151] Installing libacl-0:2.3.2-3.f 100% | 38.2 MiB/s | 39.2 KiB | 00m00s
[ 42/151] Installing dwz-0:0.16-1.fc42. 100% | 18.8 MiB/s | 288.5 KiB | 00m00s
[ 43/151] Installing mpfr-0:4.2.2-1.fc4 100% | 202.7 MiB/s | 830.4 KiB | 00m00s
[ 44/151] Installing gawk-0:5.3.1-1.fc4 100% | 77.0 MiB/s | 1.7 MiB | 00m00s
[ 45/151] Installing unzip-0:6.0-66.fc4 100% | 25.6 MiB/s | 393.8 KiB | 00m00s
[ 46/151] Installing file-libs-0:5.46-3 100% | 456.1 MiB/s | 11.9 MiB | 00m00s
[ 47/151] Installing file-0:5.46-3.fc42 100% | 3.2 MiB/s | 101.7 KiB | 00m00s
[ 48/151] Installing crypto-policies-0: 100% | 14.9 MiB/s | 167.8 KiB | 00m00s
[ 49/151] Installing pcre2-0:10.45-1.fc 100% | 227.6 MiB/s | 699.1 KiB | 00m00s
[ 50/151] Installing grep-0:3.11-10.fc4 100% | 45.6 MiB/s | 1.0 MiB | 00m00s
[ 51/151] Installing xz-1:5.8.1-2.fc42. 100% | 57.9 MiB/s | 1.3 MiB | 00m00s
[ 52/151] Installing libcap-ng-0:0.8.5- 100% | 73.1 MiB/s | 74.8 KiB | 00m00s
[ 53/151] Installing audit-libs-0:4.1.1 100% | 124.2 MiB/s | 381.5 KiB | 00m00s
[ 54/151] Installing libsmartcols-0:2.4 100% | 88.6 MiB/s | 181.5 KiB | 00m00s
[ 55/151] Installing lz4-libs-0:1.10.0- 100% | 154.7 MiB/s | 158.5 KiB | 00m00s
[ 56/151] Installing libsepol-0:3.8-1.f 100% | 201.9 MiB/s | 827.0 KiB | 00m00s
[ 57/151] Installing libselinux-0:3.8-3 100% | 94.9 MiB/s | 194.3 KiB | 00m00s
[ 58/151] Installing findutils-1:4.10.0 100% | 69.4 MiB/s | 1.9 MiB | 00m00s
[ 59/151] Installing sed-0:4.9-4.fc42.x 100% | 44.5 MiB/s | 865.5 KiB | 00m00s
[ 60/151] Installing libmount-0:2.40.4- 100% | 174.5 MiB/s | 357.3 KiB | 00m00s
[ 61/151] Installing libeconf-0:0.7.6-2 100% | 32.3 MiB/s | 66.2 KiB | 00m00s
[ 62/151] Installing pam-libs-0:1.7.0-6 100% | 63.0 MiB/s | 129.1 KiB | 00m00s
[ 63/151] Installing libcap-0:2.73-2.fc 100% | 13.8 MiB/s | 212.1 KiB | 00m00s
[ 64/151] Installing systemd-libs-0:257 100% | 248.0 MiB/s | 2.2 MiB | 00m00s
[ 65/151] Installing lua-libs-0:5.4.8-1 100% | 137.7 MiB/s | 282.0 KiB | 00m00s
[ 66/151] Installing libffi-0:3.4.6-5.f 100% | 81.7 MiB/s | 83.7 KiB | 00m00s
[ 67/151] Installing libtasn1-0:4.20.0- 100% | 87.0 MiB/s | 178.1 KiB | 00m00s
[ 68/151] Installing p11-kit-0:0.25.8-1 100% | 81.8 MiB/s | 2.3 MiB | 00m00s
[ 69/151] Installing alternatives-0:1.3 100% | 4.8 MiB/s | 63.8 KiB | 00m00s
[ 70/151] Installing libunistring-0:1.1 100% | 246.7 MiB/s | 1.7 MiB | 00m00s
[ 71/151] Installing libidn2-0:2.3.8-1. 100% | 91.6 MiB/s | 562.7 KiB | 00m00s
[ 72/151] Installing libpsl-0:0.21.5-5. 100% | 75.7 MiB/s | 77.5 KiB | 00m00s
[ 73/151] Installing p11-kit-trust-0:0. 100% | 12.9 MiB/s | 448.3 KiB | 00m00s
[ 74/151] Installing openssl-libs-1:3.2 100% | 269.8 MiB/s | 7.8 MiB | 00m00s
[ 75/151] Installing coreutils-0:9.6-6. 100% | 97.4 MiB/s | 5.5 MiB | 00m00s
[ 76/151] Installing ca-certificates-0: 100% | 1.1 MiB/s | 2.5 MiB | 00m02s
[ 77/151] Installing gzip-0:1.13-3.fc42 100% | 22.9 MiB/s | 398.4 KiB | 00m00s
[ 78/151] Installing rpm-sequoia-0:1.7. 100% | 268.3 MiB/s | 2.4 MiB | 00m00s
[ 79/151] Installing libevent-0:2.1.12- 100% | 177.1 MiB/s | 906.9 KiB | 00m00s
[ 80/151] Installing util-linux-core-0: 100% | 50.9 MiB/s | 1.4 MiB | 00m00s
[ 81/151] Installing systemd-standalone 100% | 19.4 MiB/s | 277.8 KiB | 00m00s
[ 82/151] Installing tar-2:1.35-5.fc42. 100% | 109.7 MiB/s | 3.0 MiB | 00m00s
[ 83/151] Installing libsemanage-0:3.8. 100% | 99.7 MiB/s | 306.2 KiB | 00m00s
[ 84/151] Installing shadow-utils-2:4.1 100% | 84.2 MiB/s | 4.0 MiB | 00m00s
[ 85/151] Installing zstd-0:1.5.7-1.fc4 100% | 90.0 MiB/s | 1.7 MiB | 00m00s
[ 86/151] Installing zip-0:3.0-43.fc42. 100% | 42.9 MiB/s | 702.4 KiB | 00m00s
[ 87/151] Installing libfdisk-0:2.40.4- 100% | 182.3 MiB/s | 373.4 KiB | 00m00s
[ 88/151] Installing libxml2-0:2.12.10- 100% | 84.8 MiB/s | 1.7 MiB | 00m00s
[ 89/151] Installing libarchive-0:3.8.1 100% | 155.8 MiB/s | 957.1 KiB | 00m00s
[ 90/151] Installing bzip2-0:1.0.8-20.f 100% | 5.3 MiB/s | 103.8 KiB | 00m00s
[ 91/151] Installing add-determinism-0: 100% | 98.6 MiB/s | 2.5 MiB | 00m00s
[ 92/151] Installing build-reproducibil 100% | 0.0 B/s | 1.0 KiB | 00m00s
[ 93/151] Installing sqlite-libs-0:3.47 100% | 216.1 MiB/s | 1.5 MiB | 00m00s
[ 94/151] Installing rpm-libs-0:4.20.1- 100% | 141.3 MiB/s | 723.4 KiB | 00m00s
[ 95/151] Installing ed-0:1.21-2.fc42.x 100% | 8.1 MiB/s | 148.8 KiB | 00m00s
[ 96/151] Installing patch-0:2.8-1.fc42 100% | 13.7 MiB/s | 224.3 KiB | 00m00s
[ 97/151] Installing filesystem-srpm-ma 100% | 38.0 MiB/s | 38.9 KiB | 00m00s
[ 98/151] Installing elfutils-default-y 100% | 127.7 KiB/s | 2.0 KiB | 00m00s
[ 99/151] Installing elfutils-libs-0:0. 100% | 167.3 MiB/s | 685.2 KiB | 00m00s
[100/151] Installing cpio-0:2.15-4.fc42 100% | 47.8 MiB/s | 1.1 MiB | 00m00s
[101/151] Installing diffutils-0:3.12-1 100% | 67.9 MiB/s | 1.6 MiB | 00m00s
[102/151] Installing json-c-0:0.18-2.fc 100% | 85.9 MiB/s | 88.0 KiB | 00m00s
[103/151] Installing libgomp-0:15.2.1-1 100% | 264.9 MiB/s | 542.5 KiB | 00m00s
[104/151] Installing rpm-build-libs-0:4 100% | 202.5 MiB/s | 207.4 KiB | 00m00s
[105/151] Installing jansson-0:2.14-2.f 100% | 92.2 MiB/s | 94.4 KiB | 00m00s
[106/151] Installing libpkgconf-0:2.3.0 100% | 77.4 MiB/s | 79.2 KiB | 00m00s
[107/151] Installing pkgconf-0:2.3.0-2. 100% | 6.3 MiB/s | 91.0 KiB | 00m00s
[108/151] Installing pkgconf-pkg-config 100% | 136.4 KiB/s | 1.8 KiB | 00m00s
[109/151] Installing xxhash-libs-0:0.8. 100% | 89.4 MiB/s | 91.6 KiB | 00m00s
[110/151] Installing libbrotli-0:1.1.0- 100% | 205.9 MiB/s | 843.6 KiB | 00m00s
[111/151] Installing libnghttp2-0:1.64. 100% | 83.7 MiB/s | 171.5 KiB | 00m00s
[112/151] Installing keyutils-libs-0:1. 100% | 58.3 MiB/s | 59.7 KiB | 00m00s
[113/151] Installing libcom_err-0:1.47. 100% | 66.6 MiB/s | 68.2 KiB | 00m00s
[114/151] Installing libverto-0:0.3.2-1 100% | 26.6 MiB/s | 27.2 KiB | 00m00s
[115/151] Installing krb5-libs-0:1.21.3 100% | 191.0 MiB/s | 2.3 MiB | 00m00s
[116/151] Installing libssh-0:0.11.3-1. 100% | 139.0 MiB/s | 569.2 KiB | 00m00s
[117/151] Installing libtool-ltdl-0:2.5 100% | 69.6 MiB/s | 71.2 KiB | 00m00s
[118/151] Installing gdbm-libs-1:1.23-9 100% | 128.5 MiB/s | 131.6 KiB | 00m00s
[119/151] Installing cyrus-sasl-lib-0:2 100% | 88.6 MiB/s | 2.3 MiB | 00m00s
[120/151] Installing openldap-0:2.6.10- 100% | 128.8 MiB/s | 659.6 KiB | 00m00s
[121/151] Installing libcurl-0:8.11.1-6 100% | 203.9 MiB/s | 835.2 KiB | 00m00s
[122/151] Installing elfutils-debuginfo 100% | 5.6 MiB/s | 86.2 KiB | 00m00s
[123/151] Installing elfutils-0:0.193-2 100% | 116.9 MiB/s | 2.9 MiB | 00m00s
[124/151] Installing binutils-0:2.44-6. 100% | 211.8 MiB/s | 25.8 MiB | 00m00s
[125/151] Installing gdb-minimal-0:16.3 100% | 224.5 MiB/s | 13.2 MiB | 00m00s
[126/151] Installing debugedit-0:5.1-7. 100% | 11.9 MiB/s | 195.4 KiB | 00m00s
[127/151] Installing curl-0:8.11.1-6.fc 100% | 11.6 MiB/s | 453.1 KiB | 00m00s
[128/151] Installing rpm-0:4.20.1-1.fc4 100% | 56.8 MiB/s | 2.5 MiB | 00m00s
[129/151] Installing lua-srpm-macros-0: 100% | 1.9 MiB/s | 1.9 KiB | 00m00s
[130/151] Installing tree-sitter-srpm-m 100% | 7.2 MiB/s | 7.4 KiB | 00m00s
[131/151] Installing zig-srpm-macros-0: 100% | 0.0 B/s | 1.7 KiB | 00m00s
[132/151] Installing efi-srpm-macros-0: 100% | 40.2 MiB/s | 41.1 KiB | 00m00s
[133/151] Installing perl-srpm-macros-0 100% | 0.0 B/s | 1.1 KiB | 00m00s
[134/151] Installing package-notes-srpm 100% | 0.0 B/s | 2.0 KiB | 00m00s
[135/151] Installing openblas-srpm-macr 100% | 0.0 B/s | 392.0 B | 00m00s
[136/151] Installing ocaml-srpm-macros- 100% | 0.0 B/s | 2.2 KiB | 00m00s
[137/151] Installing kernel-srpm-macros 100% | 0.0 B/s | 2.3 KiB | 00m00s
[138/151] Installing gnat-srpm-macros-0 100% | 0.0 B/s | 1.3 KiB | 00m00s
[139/151] Installing ghc-srpm-macros-0: 100% | 0.0 B/s | 1.0 KiB | 00m00s
[140/151] Installing fpc-srpm-macros-0: 100% | 0.0 B/s | 420.0 B | 00m00s
[141/151] Installing ansible-srpm-macro 100% | 35.4 MiB/s | 36.2 KiB | 00m00s
[142/151] Installing forge-srpm-macros- 100% | 39.3 MiB/s | 40.3 KiB | 00m00s
[143/151] Installing fonts-srpm-macros- 100% | 55.7 MiB/s | 57.0 KiB | 00m00s
[144/151] Installing go-srpm-macros-0:3 100% | 61.6 MiB/s | 63.0 KiB | 00m00s
[145/151] Installing python-srpm-macros 100% | 50.9 MiB/s | 52.2 KiB | 00m00s
[146/151] Installing redhat-rpm-config- 100% | 62.6 MiB/s | 192.2 KiB | 00m00s
[147/151] Installing rpm-build-0:4.20.1 100% | 10.2 MiB/s | 177.4 KiB | 00m00s
[148/151] Installing pyproject-srpm-mac 100% | 1.2 MiB/s | 2.5 KiB | 00m00s
[149/151] Installing util-linux-0:2.40. 100% | 55.0 MiB/s | 3.5 MiB | 00m00s
[150/151] Installing which-0:2.23-2.fc4 100% | 6.0 MiB/s | 85.7 KiB | 00m00s
[151/151] Installing info-0:7.2-3.fc42. 100% | 112.4 KiB/s | 358.3 KiB | 00m03s
Complete!
Finish: installing minimal buildroot with dnf5
Start: creating root cache
Finish: creating root cache
Finish: chroot init
INFO: Installed packages:
INFO: add-determinism-0.6.0-1.fc42.x86_64 alternatives-1.33-1.fc42.x86_64 ansible-srpm-macros-1-17.1.fc42.noarch audit-libs-4.1.1-1.fc42.x86_64 basesystem-11-22.fc42.noarch bash-5.2.37-1.fc42.x86_64 binutils-2.44-6.fc42.x86_64 build-reproducibility-srpm-macros-0.6.0-1.fc42.noarch bzip2-1.0.8-20.fc42.x86_64 bzip2-libs-1.0.8-20.fc42.x86_64 ca-certificates-2025.2.80_v9.0.304-1.0.fc42.noarch coreutils-9.6-6.fc42.x86_64 coreutils-common-9.6-6.fc42.x86_64 cpio-2.15-4.fc42.x86_64 crypto-policies-20250707-1.gitad370a8.fc42.noarch curl-8.11.1-6.fc42.x86_64 cyrus-sasl-lib-2.1.28-30.fc42.x86_64 debugedit-5.1-7.fc42.x86_64 diffutils-3.12-1.fc42.x86_64 dwz-0.16-1.fc42.x86_64 ed-1.21-2.fc42.x86_64 efi-srpm-macros-6-3.fc42.noarch elfutils-0.193-2.fc42.x86_64 elfutils-debuginfod-client-0.193-2.fc42.x86_64 elfutils-default-yama-scope-0.193-2.fc42.noarch elfutils-libelf-0.193-2.fc42.x86_64 elfutils-libs-0.193-2.fc42.x86_64 fedora-gpg-keys-42-1.noarch fedora-release-42-30.noarch fedora-release-common-42-30.noarch fedora-release-identity-basic-42-30.noarch fedora-repos-42-1.noarch file-5.46-3.fc42.x86_64 file-libs-5.46-3.fc42.x86_64 filesystem-3.18-47.fc42.x86_64 filesystem-srpm-macros-3.18-47.fc42.noarch findutils-4.10.0-5.fc42.x86_64 fonts-srpm-macros-2.0.5-22.fc42.noarch forge-srpm-macros-0.4.0-2.fc42.noarch fpc-srpm-macros-1.3-14.fc42.noarch gawk-5.3.1-1.fc42.x86_64 gdb-minimal-16.3-1.fc42.x86_64 gdbm-libs-1.23-9.fc42.x86_64 ghc-srpm-macros-1.9.2-2.fc42.noarch glibc-2.41-11.fc42.x86_64 glibc-common-2.41-11.fc42.x86_64 glibc-gconv-extra-2.41-11.fc42.x86_64 glibc-minimal-langpack-2.41-11.fc42.x86_64 gmp-6.3.0-4.fc42.x86_64
gnat-srpm-macros-6-7.fc42.noarch gnulib-l10n-20241231-1.fc42.noarch go-srpm-macros-3.8.0-1.fc42.noarch gpg-pubkey-105ef944-65ca83d1 grep-3.11-10.fc42.x86_64 gzip-1.13-3.fc42.x86_64 info-7.2-3.fc42.x86_64 jansson-2.14-2.fc42.x86_64 json-c-0.18-2.fc42.x86_64 kernel-srpm-macros-1.0-25.fc42.noarch keyutils-libs-1.6.3-5.fc42.x86_64 krb5-libs-1.21.3-6.fc42.x86_64 libacl-2.3.2-3.fc42.x86_64 libarchive-3.8.1-1.fc42.x86_64 libattr-2.5.2-5.fc42.x86_64 libblkid-2.40.4-7.fc42.x86_64 libbrotli-1.1.0-6.fc42.x86_64 libcap-2.73-2.fc42.x86_64 libcap-ng-0.8.5-4.fc42.x86_64 libcom_err-1.47.2-3.fc42.x86_64 libcurl-8.11.1-6.fc42.x86_64 libeconf-0.7.6-2.fc42.x86_64 libevent-2.1.12-15.fc42.x86_64 libfdisk-2.40.4-7.fc42.x86_64 libffi-3.4.6-5.fc42.x86_64 libgcc-15.2.1-1.fc42.x86_64 libgomp-15.2.1-1.fc42.x86_64 libidn2-2.3.8-1.fc42.x86_64 libmount-2.40.4-7.fc42.x86_64 libnghttp2-1.64.0-3.fc42.x86_64 libpkgconf-2.3.0-2.fc42.x86_64 libpsl-0.21.5-5.fc42.x86_64 libselinux-3.8-3.fc42.x86_64 libsemanage-3.8.1-2.fc42.x86_64 libsepol-3.8-1.fc42.x86_64 libsmartcols-2.40.4-7.fc42.x86_64 libssh-0.11.3-1.fc42.x86_64 libssh-config-0.11.3-1.fc42.noarch libstdc++-15.2.1-1.fc42.x86_64 libtasn1-4.20.0-1.fc42.x86_64 libtool-ltdl-2.5.4-4.fc42.x86_64 libunistring-1.1-9.fc42.x86_64 libuuid-2.40.4-7.fc42.x86_64 libverto-0.3.2-10.fc42.x86_64 libxcrypt-4.4.38-7.fc42.x86_64 libxml2-2.12.10-1.fc42.x86_64 libzstd-1.5.7-1.fc42.x86_64 lua-libs-5.4.8-1.fc42.x86_64 lua-srpm-macros-1-15.fc42.noarch lz4-libs-1.10.0-2.fc42.x86_64 mpfr-4.2.2-1.fc42.x86_64 ncurses-base-6.5-5.20250125.fc42.noarch ncurses-libs-6.5-5.20250125.fc42.x86_64 ocaml-srpm-macros-10-4.fc42.noarch openblas-srpm-macros-2-19.fc42.noarch openldap-2.6.10-1.fc42.x86_64 openssl-libs-3.2.6-2.fc42.x86_64 p11-kit-0.25.8-1.fc42.x86_64 p11-kit-trust-0.25.8-1.fc42.x86_64 package-notes-srpm-macros-0.5-13.fc42.noarch pam-libs-1.7.0-6.fc42.x86_64 patch-2.8-1.fc42.x86_64 pcre2-10.45-1.fc42.x86_64 pcre2-syntax-10.45-1.fc42.noarch perl-srpm-macros-1-57.fc42.noarch
pkgconf-2.3.0-2.fc42.x86_64 pkgconf-m4-2.3.0-2.fc42.noarch pkgconf-pkg-config-2.3.0-2.fc42.x86_64 popt-1.19-8.fc42.x86_64 publicsuffix-list-dafsa-20250616-1.fc42.noarch pyproject-srpm-macros-1.18.4-1.fc42.noarch python-srpm-macros-3.13-5.fc42.noarch qt5-srpm-macros-5.15.17-1.fc42.noarch qt6-srpm-macros-6.9.2-1.fc42.noarch readline-8.2-13.fc42.x86_64 redhat-rpm-config-342-4.fc42.noarch rpm-4.20.1-1.fc42.x86_64 rpm-build-4.20.1-1.fc42.x86_64 rpm-build-libs-4.20.1-1.fc42.x86_64 rpm-libs-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 rust-srpm-macros-26.4-1.fc42.noarch sed-4.9-4.fc42.x86_64 setup-2.15.0-13.fc42.noarch shadow-utils-4.17.4-1.fc42.x86_64 sqlite-libs-3.47.2-5.fc42.x86_64 systemd-libs-257.9-2.fc42.x86_64 systemd-standalone-sysusers-257.9-2.fc42.x86_64 tar-1.35-5.fc42.x86_64 tree-sitter-srpm-macros-0.1.0-8.fc42.noarch unzip-6.0-66.fc42.x86_64 util-linux-2.40.4-7.fc42.x86_64 util-linux-core-2.40.4-7.fc42.x86_64 which-2.23-2.fc42.x86_64 xxhash-libs-0.8.3-2.fc42.x86_64 xz-5.8.1-2.fc42.x86_64 xz-libs-5.8.1-2.fc42.x86_64 zig-srpm-macros-1-4.fc42.noarch zip-3.0-43.fc42.x86_64 zlib-ng-compat-2.2.5-2.fc42.x86_64 zstd-1.5.7-1.fc42.x86_64
Start: buildsrpm
Start: rpmbuild -bs
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1760486400
Wrote: /builddir/build/SRPMS/ollama-0.12.5-1.fc42.src.rpm
Finish: rpmbuild -bs
INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan
INFO: /var/lib/mock/fedora-42-x86_64-1760542611.653638/root/var/log/dnf5.log
INFO: chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz
/bin/tar: Removing leading `/' from member names
Finish: buildsrpm
INFO: Done(/var/lib/copr-rpmbuild/workspace/workdir-l63xe8vc/ollama/ollama.spec) Config(child) 0 minutes 35 seconds
INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results
INFO: Cleaning up build root ('cleanup_on_success=True')
Start: clean chroot
INFO: unmounting tmpfs.
Finish: clean chroot
INFO: Start(/var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm) Config(fedora-42-x86_64)
Start(bootstrap): chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760542611.653638/root.
INFO: reusing tmpfs at /var/lib/mock/fedora-42-x86_64-bootstrap-1760542611.653638/root.
INFO: calling preinit hooks
INFO: enabled root cache
INFO: enabled package manager cache
Start(bootstrap): cleaning package manager metadata
Finish(bootstrap): cleaning package manager metadata
Finish(bootstrap): chroot init
Start: chroot init
INFO: mounting tmpfs at /var/lib/mock/fedora-42-x86_64-1760542611.653638/root.
INFO: calling preinit hooks
INFO: enabled root cache
Start: unpacking root cache
Finish: unpacking root cache
INFO: enabled package manager cache
Start: cleaning package manager metadata
Finish: cleaning package manager metadata
INFO: enabled HW Info plugin
INFO: Buildroot is handled by package management downloaded with a bootstrap image: rpm-4.20.1-1.fc42.x86_64 rpm-sequoia-1.7.0-5.fc42.x86_64 dnf5-5.2.16.0-1.fc42.x86_64 dnf5-plugins-5.2.16.0-1.fc42.x86_64
Finish: chroot init
Start: build phase for ollama-0.12.5-1.fc42.src.rpm
Start: build setup for ollama-0.12.5-1.fc42.src.rpm
Building target platforms: x86_64
Building for target x86_64
setting SOURCE_DATE_EPOCH=1760486400
Wrote: /builddir/build/SRPMS/ollama-0.12.5-1.fc42.src.rpm
Updating and loading repositories:
 Additional repo https_developer_downlo 100% | 25.2 KiB/s | 3.9 KiB | 00m00s
 Additional repo https_developer_downlo 100% | 25.3 KiB/s | 3.9 KiB | 00m00s
 Copr repository 100% | 11.7 KiB/s | 1.8 KiB | 00m00s
 fedora 100% | 133.6 KiB/s | 30.3 KiB | 00m00s
 updates 100% | 152.4 KiB/s | 29.0 KiB | 00m00s
Repositories loaded.
Package Arch Version Repository Size
Installing:
 ccache x86_64 4.10.2-2.fc42 fedora 1.6 MiB
 cmake x86_64 3.31.6-2.fc42 fedora 34.2 MiB
 cuda-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 gcc-c++ x86_64 15.2.1-1.fc42 updates 41.3 MiB
 git x86_64 2.51.0-2.fc42 updates 56.4 KiB
 golang x86_64 1.24.8-1.fc42 updates 8.9 MiB
 systemd x86_64 257.9-2.fc42 updates 12.1 MiB
Installing dependencies:
 OpenCL-ICD-Loader x86_64 3.0.6-2.20241023git5907ac1.fc42 fedora 70.7 KiB
 abattis-cantarell-vf-fonts noarch 0.301-14.fc42 fedora 192.7 KiB
 alsa-lib x86_64 1.2.14-3.fc42 updates 1.4 MiB
 annobin-docs noarch 12.94-1.fc42 updates 98.9 KiB
 annobin-plugin-gcc x86_64 12.94-1.fc42 updates 993.5 KiB
 authselect x86_64 1.5.1-1.fc42 fedora 153.9 KiB
 authselect-libs x86_64 1.5.1-1.fc42 fedora 825.0 KiB
 avahi-libs x86_64 0.9~rc2-2.fc42 fedora 183.6 KiB
 cairo x86_64 1.18.2-3.fc42 fedora 1.8 MiB
 cmake-data noarch 3.31.6-2.fc42 fedora 8.5 MiB
 cmake-filesystem x86_64 3.31.6-2.fc42 fedora 0.0 B
 cmake-rpm-macros noarch 3.31.6-2.fc42 fedora 7.7 KiB
 cpp x86_64 15.2.1-1.fc42 updates 37.9 MiB
 cracklib x86_64 2.9.11-7.fc42 fedora 242.4 KiB
 cuda-cccl-12-9 x86_64 12.9.27-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 12.7 MiB
 cuda-command-line-tools-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-compiler-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-crt-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 928.8 KiB
 cuda-cudart-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 785.8 KiB
 cuda-cudart-devel-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 8.5 MiB
 cuda-cuobjdump-12-9 x86_64 12.9.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 665.7 KiB
 cuda-cupti-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 143.2 MiB
 cuda-cuxxfilt-12-9 x86_64 12.9.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 1.0 MiB
 cuda-demo-suite-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 13.9 MiB
 cuda-documentation-12-9 x86_64 12.9.88-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 538.3 KiB
 cuda-driver-devel-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 131.0 KiB
 cuda-gdb-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 89.7 MiB
 cuda-libraries-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-libraries-devel-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-nsight-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 113.2 MiB
 cuda-nsight-compute-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 7.3 KiB
 cuda-nsight-systems-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 1.7 KiB
 cuda-nvcc-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 317.8 MiB
 cuda-nvdisasm-12-9 x86_64 12.9.88-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 6.1 MiB
 cuda-nvml-devel-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 1.4 MiB
 cuda-nvprof-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 10.6 MiB
 cuda-nvprune-12-9 x86_64 12.9.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 181.0 KiB
 cuda-nvrtc-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 216.9 MiB
 cuda-nvrtc-devel-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 248.0 MiB
 cuda-nvtx-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 415.0 KiB
 cuda-nvvm-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 132.6 MiB
 cuda-nvvp-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 127.7 MiB
 cuda-opencl-12-9 x86_64 12.9.19-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 91.7 KiB
 cuda-opencl-devel-12-9 x86_64 12.9.19-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 741.1 KiB
 cuda-profiler-api-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 73.4 KiB
 cuda-runtime-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-sandbox-devel-12-9 x86_64 12.9.19-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 146.3 KiB
 cuda-sanitizer-12-9 x86_64 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 37.3 MiB
 cuda-toolkit-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 3.2 KiB
 cuda-toolkit-12-9-config-common noarch 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-toolkit-12-config-common noarch 12.9.79-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 44.0 B
 cuda-toolkit-config-common noarch 13.0.96-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 41.0 B
 cuda-tools-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cuda-visual-tools-12-9 x86_64 12.9.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 0.0 B
 cups-filesystem noarch 1:2.4.14-2.fc42 updates 0.0 B
 cups-libs x86_64 1:2.4.14-2.fc42 updates 618.7 KiB
 dbus x86_64 1:1.16.0-3.fc42 fedora 0.0 B
 dbus-broker x86_64 36-6.fc42 updates 387.1 KiB
 dbus-common noarch 1:1.16.0-3.fc42 fedora 11.2 KiB
 dbus-libs x86_64 1:1.16.0-3.fc42 fedora 349.5 KiB
 default-fonts-core-sans noarch 4.2-4.fc42 fedora 11.9 KiB
 dkms noarch 3.2.2-1.fc42 updates 209.7 KiB
 elfutils-libelf-devel x86_64 0.193-2.fc42 updates 50.0 KiB
 emacs-filesystem noarch 1:30.0-4.fc42 fedora 0.0 B
 expat x86_64 2.7.2-1.fc42 updates 298.6 KiB
 fmt x86_64 11.1.4-1.fc42 fedora 263.9 KiB
 fontconfig x86_64 2.16.0-2.fc42 fedora 764.7 KiB
 fonts-filesystem noarch 1:2.0.5-22.fc42 updates 0.0 B
 freetype x86_64 2.13.3-2.fc42 fedora 858.2 KiB
 gcc x86_64 15.2.1-1.fc42 updates 111.2 MiB
 gcc-plugin-annobin x86_64 15.2.1-1.fc42 updates 57.1 KiB
 gdbm x86_64 1:1.23-9.fc42 fedora 460.3 KiB
 gds-tools-12-9 x86_64 1.14.1.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 59.9 MiB
 git-core x86_64 2.51.0-2.fc42 updates 23.6 MiB
 git-core-doc noarch 2.51.0-2.fc42 updates 17.7 MiB
 glib2 x86_64 2.84.4-1.fc42 updates 14.7 MiB
 glibc-devel x86_64 2.41-11.fc42 updates 2.3 MiB
 gnutls x86_64 3.8.10-1.fc42 updates 3.8 MiB
 go-filesystem x86_64 3.8.0-1.fc42 updates 0.0 B
 golang-bin x86_64 1.24.8-1.fc42 updates 122.0 MiB
 golang-src noarch 1.24.8-1.fc42 updates 79.2 MiB
 google-noto-fonts-common noarch 20250301-1.fc42 fedora 17.7 KiB
 google-noto-sans-vf-fonts noarch 20250301-1.fc42 fedora 1.4 MiB
 graphite2 x86_64 1.3.14-18.fc42 fedora 195.8 KiB
 groff-base x86_64 1.23.0-8.fc42 fedora 3.9 MiB
 harfbuzz x86_64 10.4.0-1.fc42 fedora 2.7 MiB
 hiredis x86_64 1.2.0-6.fc42 fedora 105.9 KiB
 hwdata noarch 0.400-1.fc42 updates 9.6 MiB
 java-21-openjdk x86_64 1:21.0.8.0.9-1.fc42 updates 1.0 MiB
 java-21-openjdk-headless x86_64 1:21.0.8.0.9-1.fc42 updates 197.8 MiB
 javapackages-filesystem noarch 6.4.0-5.fc42 fedora 2.0 KiB
 jsoncpp x86_64 1.9.6-1.fc42 fedora 261.6 KiB
 kernel-headers x86_64 6.16.2-200.fc42 updates 6.7 MiB
 kmod x86_64 33-3.fc42 fedora 235.4 KiB
 kmod-nvidia-open-dkms noarch 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 119.3 MiB
 less x86_64 679-1.fc42 updates 406.1 KiB
 libICE x86_64 1.1.2-2.fc42 fedora 198.4 KiB
 libSM x86_64 1.2.5-2.fc42 fedora 105.0 KiB
 libX11 x86_64 1.8.12-1.fc42 updates 1.3 MiB
 libX11-common noarch 1.8.12-1.fc42 updates 1.2 MiB
 libX11-xcb x86_64 1.8.12-1.fc42 updates 10.9 KiB
 libXau x86_64 1.0.12-2.fc42 fedora 76.9 KiB
 libXcomposite x86_64 0.4.6-5.fc42 fedora 44.4 KiB
 libXdamage x86_64 1.1.6-5.fc42 fedora 43.6 KiB
 libXext x86_64 1.3.6-3.fc42 fedora 90.0 KiB
 libXfixes x86_64 6.0.1-5.fc42 fedora 30.2 KiB
 libXi x86_64 1.8.2-2.fc42 fedora 84.6 KiB
 libXrandr x86_64 1.5.4-5.fc42 fedora 55.8 KiB
 libXrender x86_64 0.9.12-2.fc42 fedora 50.0 KiB
 libXtst x86_64 1.2.5-2.fc42 fedora 33.5 KiB
 libb2 x86_64 0.98.1-13.fc42 fedora 46.1 KiB
 libcbor x86_64 0.11.0-3.fc42 fedora 77.8 KiB
 libcublas-12-9 x86_64 12.9.1.4-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 815.6 MiB
 libcublas-devel-12-9 x86_64 12.9.1.4-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 1.2 GiB
 libcufft-12-9 x86_64 11.4.1.4-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 277.2 MiB
 libcufft-devel-12-9 x86_64 11.4.1.4-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 567.3 MiB
 libcufile-12-9 x86_64 1.14.1.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 3.2 MiB
 libcufile-devel-12-9 x86_64 1.14.1.1-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 27.9 MiB
 libcurand-12-9 x86_64 10.3.10.19-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 159.3 MiB
 libcurand-devel-12-9 x86_64 10.3.10.19-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 161.3 MiB
 libcusolver-12-9 x86_64 11.7.5.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 470.6 MiB
 libcusolver-devel-12-9 x86_64 11.7.5.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 332.5 MiB
 libcusparse-12-9 x86_64 12.5.10.65-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 463.0 MiB
 libcusparse-devel-12-9 x86_64 12.5.10.65-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 960.3 MiB
 libdrm x86_64 2.4.126-1.fc42 updates 399.9 KiB
 libedit x86_64 3.1-55.20250104cvs.fc42 fedora 244.1 KiB
 libfido2 x86_64 1.15.0-3.fc42 fedora 242.1 KiB
 libfontenc x86_64 1.1.8-3.fc42 fedora 70.9 KiB
 libglvnd x86_64 1:1.7.0-7.fc42 fedora 530.2 KiB
 libglvnd-egl x86_64 1:1.7.0-7.fc42 fedora 68.7 KiB
 libglvnd-opengl x86_64 1:1.7.0-7.fc42 fedora 148.8 KiB
 libmpc x86_64 1.3.1-7.fc42 fedora 164.5 KiB
 libnpp-12-9 x86_64 12.4.1.87-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 393.0 MiB
 libnpp-devel-12-9 x86_64 12.4.1.87-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 406.2 MiB
 libnsl2 x86_64 2.0.1-3.fc42 fedora 57.9 KiB
 libnvfatbin-12-9 x86_64 12.9.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 2.4 MiB
 libnvfatbin-devel-12-9 x86_64 12.9.82-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 2.3 MiB
 libnvidia-cfg x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 386.1 KiB
 libnvidia-gpucomp x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 68.9 MiB
 libnvidia-ml x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 2.2 MiB
 libnvjitlink-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 91.6 MiB
 libnvjitlink-devel-12-9 x86_64 12.9.86-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 127.6 MiB
 libnvjpeg-12-9 x86_64 12.4.0.76-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 9.0 MiB
 libnvjpeg-devel-12-9 x86_64 12.4.0.76-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 9.4 MiB
 libpciaccess x86_64 0.16-15.fc42 fedora 44.5 KiB
 libpng x86_64 2:1.6.44-2.fc42 fedora 241.7 KiB
 libpwquality x86_64 1.4.5-12.fc42 fedora 409.3 KiB
 libseccomp x86_64 2.5.5-2.fc41 fedora 173.3 KiB
 libstdc++-devel x86_64 15.2.1-1.fc42 updates 16.1 MiB
 libtirpc x86_64 1.3.7-0.fc42 updates 198.9 KiB
 libuv x86_64 1:1.51.0-1.fc42 updates 570.2 KiB
 libwayland-client x86_64 1.24.0-1.fc42 updates 62.0 KiB
 libwayland-server x86_64 1.24.0-1.fc42 updates 78.5 KiB
 libxcb x86_64 1.17.0-5.fc42 fedora 1.1 MiB
 libxcrypt-devel x86_64 4.4.38-7.fc42 updates 30.8 KiB
 libxkbcommon x86_64 1.8.1-1.fc42 fedora 367.4 KiB
 libxkbcommon-x11 x86_64 1.8.1-1.fc42 fedora 35.5 KiB
 libxshmfence x86_64 1.3.2-6.fc42 fedora 12.4 KiB
 libzstd-devel x86_64 1.5.7-1.fc42 fedora 208.0 KiB
 lksctp-tools x86_64 1.0.21-1.fc42 updates 251.0 KiB
 llvm-filesystem x86_64 20.1.8-4.fc42 updates 0.0 B
 llvm-libs x86_64 20.1.8-4.fc42 updates 137.1 MiB
 lm_sensors-libs x86_64 3.6.0-22.fc42 fedora 85.8 KiB
 make x86_64 1:4.4.1-10.fc42 fedora 1.8 MiB
 mesa-dri-drivers x86_64 25.1.9-1.fc42 updates 46.7 MiB
 mesa-filesystem x86_64 25.1.9-1.fc42 updates 3.6 KiB
 mesa-libEGL x86_64 25.1.9-1.fc42 updates 334.9 KiB
 mesa-libgbm x86_64 25.1.9-1.fc42 updates 19.7 KiB
 mkfontscale x86_64 1.2.3-2.fc42 fedora 45.0 KiB
 mpdecimal x86_64 4.0.1-1.fc42 updates 217.2 KiB
 ncurses x86_64 6.5-5.20250125.fc42 fedora 608.1 KiB
 nettle x86_64 3.10.1-1.fc42 fedora 790.5 KiB
 nsight-compute-2025.2.1 x86_64 2025.2.1.3-1 https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64 1.1 GiB
 nsight-systems-2025.3.2 x86_64 2025.3.2.474_253236389321v0-0 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 1.0 GiB
 nspr x86_64 4.37.0-3.fc42 updates 315.5 KiB
 nss x86_64 3.116.0-1.fc42 updates 1.9 MiB
 nss-softokn x86_64 3.116.0-1.fc42 updates 1.9 MiB
 nss-softokn-freebl x86_64 3.116.0-1.fc42 updates 848.3 KiB
 nss-sysinit x86_64 3.116.0-1.fc42 updates 18.1 KiB
 nss-util x86_64 3.116.0-1.fc42 updates 200.8 KiB
 numactl-libs x86_64 2.0.19-2.fc42 fedora 52.9 KiB
 nvidia-driver-cuda x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 1.4 MiB
 nvidia-driver-cuda-libs x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 343.5 MiB
 nvidia-kmod-common noarch 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 100.4 MiB
 nvidia-modprobe x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 54.8 KiB
 nvidia-persistenced x86_64 3:580.95.05-1.fc42 https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64 58.1 KiB
 opencl-filesystem noarch 1.0-22.fc42 fedora 0.0 B
 openssh x86_64 9.9p1-11.fc42 updates 1.4 MiB
 openssh-clients x86_64 9.9p1-11.fc42 updates 2.7 MiB
 openssl x86_64 1:3.2.6-2.fc42 updates 1.7 MiB
 pam x86_64 1.7.0-6.fc42 updates 1.6 MiB
 perl-AutoLoader noarch 5.74-519.fc42 updates 20.5 KiB
 perl-B x86_64 1.89-519.fc42 updates 498.0 KiB
 perl-Carp noarch 1.54-512.fc42 fedora 46.6 KiB
 perl-Class-Struct noarch 0.68-519.fc42 updates 25.4 KiB
 perl-Data-Dumper x86_64 2.189-513.fc42 fedora 115.6 KiB
 perl-Digest noarch 1.20-512.fc42 fedora 35.3 KiB
 perl-Digest-MD5 x86_64 2.59-6.fc42 fedora 59.7 KiB
 perl-DynaLoader x86_64 1.56-519.fc42 updates 32.1 KiB
 perl-Encode x86_64 4:3.21-512.fc42 fedora 4.7 MiB
 perl-Errno x86_64 1.38-519.fc42 updates 8.3 KiB
 perl-Error noarch 1:0.17030-1.fc42 fedora 76.7 KiB
 perl-Exporter noarch 5.78-512.fc42 fedora 54.3 KiB
 perl-Fcntl x86_64 1.18-519.fc42 updates 48.9 KiB
 perl-File-Basename noarch 2.86-519.fc42 updates 14.0 KiB
 perl-File-Path noarch 2.18-512.fc42 fedora 63.5 KiB
 perl-File-Temp noarch 1:0.231.100-512.fc42 fedora 162.3 KiB
 perl-File-stat noarch 1.14-519.fc42 updates 12.5 KiB
 perl-FileHandle noarch 2.05-519.fc42 updates 9.3 KiB
 perl-Getopt-Long noarch 1:2.58-3.fc42 fedora 144.5 KiB
 perl-Getopt-Std noarch 1.14-519.fc42 updates 11.2 KiB
 perl-Git noarch 2.51.0-2.fc42 updates 64.4 KiB
 perl-HTTP-Tiny noarch 0.090-2.fc42 fedora 154.4 KiB
 perl-IO x86_64 1.55-519.fc42 updates 147.0 KiB
 perl-IO-Socket-IP noarch 0.43-2.fc42 fedora 100.3 KiB
 perl-IO-Socket-SSL noarch 2.089-2.fc42 fedora 703.3 KiB
 perl-IPC-Open3 noarch 1.22-519.fc42 updates 22.5 KiB
 perl-MIME-Base32 noarch 1.303-23.fc42 fedora 30.7 KiB
 perl-MIME-Base64 x86_64 3.16-512.fc42 fedora 42.0 KiB
 perl-Net-SSLeay x86_64 1.94-8.fc42 fedora 1.3 MiB
 perl-POSIX x86_64 2.20-519.fc42 updates 231.0 KiB
 perl-PathTools x86_64 3.91-513.fc42 fedora 180.0 KiB
 perl-Pod-Escapes noarch 1:1.07-512.fc42 fedora 24.9 KiB
 perl-Pod-Perldoc noarch 3.28.01-513.fc42 fedora 163.7 KiB
 perl-Pod-Simple noarch 1:3.45-512.fc42 fedora 560.8 KiB
 perl-Pod-Usage noarch 4:2.05-1.fc42 fedora 86.3 KiB
 perl-Scalar-List-Utils x86_64 5:1.70-1.fc42 updates 144.9 KiB
 perl-SelectSaver noarch 1.02-519.fc42 updates 2.2 KiB
 perl-Socket x86_64 4:2.038-512.fc42 fedora 119.9 KiB
 perl-Storable x86_64 1:3.32-512.fc42 fedora 232.3 KiB
 perl-Symbol noarch 1.09-519.fc42 updates 6.8 KiB
 perl-Term-ANSIColor noarch 5.01-513.fc42 fedora 97.5 KiB
 perl-Term-Cap noarch 1.18-512.fc42 fedora 29.3 KiB
 perl-TermReadKey x86_64 2.38-24.fc42 fedora 64.0 KiB
 perl-Text-ParseWords noarch 3.31-512.fc42 fedora 13.6 KiB
 perl-Text-Tabs+Wrap noarch 2024.001-512.fc42 fedora 22.6 KiB
 perl-Time-Local noarch 2:1.350-512.fc42 fedora 68.9 KiB
 perl-URI noarch 5.31-2.fc42 fedora 257.0 KiB
 perl-base noarch 2.27-519.fc42 updates 12.5 KiB
 perl-constant noarch 1.33-513.fc42 fedora 26.2 KiB
 perl-if noarch 0.61.000-519.fc42 updates 5.8 KiB
 perl-interpreter x86_64 4:5.40.3-519.fc42 updates 118.4 KiB
 perl-lib x86_64 0.65-519.fc42 updates 8.5 KiB
 perl-libnet noarch 3.15-513.fc42 fedora 289.4 KiB
 perl-libs x86_64 4:5.40.3-519.fc42 updates 9.8 MiB
 perl-locale noarch 1.12-519.fc42 updates 6.5 KiB
 perl-mro x86_64 1.29-519.fc42 updates 41.5 KiB
 perl-overload noarch 1.37-519.fc42 updates 71.5 KiB
 perl-overloading noarch 0.02-519.fc42 updates 4.8 KiB
 perl-parent noarch 1:0.244-2.fc42 fedora 10.3 KiB
 perl-podlators noarch 1:6.0.2-3.fc42 fedora 317.5 KiB
 perl-vars noarch 1.05-519.fc42 updates 3.9 KiB
 pixman x86_64 0.46.2-1.fc42 updates 710.3 KiB
 python-pip-wheel noarch 24.3.1-5.fc42 updates 1.2 MiB
 python3 x86_64 3.13.7-1.fc42 updates 28.7 KiB
 python3-libs x86_64 3.13.7-1.fc42 updates 40.1 MiB
 rhash x86_64 1.4.5-2.fc42 fedora 351.0 KiB
 spirv-tools-libs x86_64 2025.2-2.fc42 updates 5.8 MiB
 systemd-pam x86_64 257.9-2.fc42 updates 1.1 MiB
 systemd-rpm-macros noarch 257.9-2.fc42 updates 10.7 KiB
 systemd-shared x86_64 257.9-2.fc42 updates 4.6 MiB
 ttmkfdir x86_64 3.0.9-72.fc42 fedora 118.5 KiB
 tzdata noarch 2025b-1.fc42 fedora 1.6 MiB
 tzdata-java noarch 2025b-1.fc42 fedora 100.1 KiB
 vim-filesystem noarch 2:9.1.1818-1.fc42 updates 40.0 B
 xcb-util x86_64 0.4.1-7.fc42 fedora 26.3 KiB
 xcb-util-image x86_64 0.4.1-7.fc42 fedora 22.2 KiB
 xcb-util-keysyms x86_64 0.4.1-7.fc42 fedora 16.7 KiB
 xcb-util-renderutil x86_64 0.3.10-7.fc42 fedora 24.4 KiB
 xcb-util-wm x86_64 0.4.2-7.fc42 fedora 81.2 KiB
 xkeyboard-config noarch 2.44-1.fc42 fedora 6.6 MiB
 xml-common noarch 0.6.3-66.fc42 fedora 78.4 KiB
 xorg-x11-fonts-Type1 noarch 7.5-40.fc42 fedora 863.3 KiB
 zlib-ng-compat-devel x86_64 2.2.5-2.fc42 updates 107.0 KiB
Transaction Summary: Installing: 281 packages
Total size of inbound packages is 6 GiB. Need to download 6 GiB.
After this operation, 12 GiB extra will be used (install 12 GiB, remove 0 B).
[ 1/281] cuda-12-9-0:12.9.1-1.x86_64 100% | 54.0 KiB/s | 7.4 KiB | 00m00s [ 2/281] ccache-0:4.10.2-2.fc42.x86_64 100% | 3.0 MiB/s | 679.0 KiB | 00m00s [ 3/281] cmake-0:3.31.6-2.fc42.x86_64 100% | 27.8 MiB/s | 12.2 MiB | 00m00s [ 4/281] git-0:2.51.0-2.fc42.x86_64 100% | 88.1 KiB/s | 40.8 KiB | 00m00s [ 5/281] golang-0:1.24.8-1.fc42.x86_64 100% | 1.1 MiB/s | 670.5 KiB | 00m01s [ 6/281] fmt-0:11.1.4-1.fc42.x86_64 100% | 4.4 MiB/s | 99.8 KiB | 00m00s [ 7/281] hiredis-0:1.2.0-6.fc42.x86_64 100% | 1.8 MiB/s | 50.7 KiB | 00m00s [ 8/281] cmake-data-0:3.31.6-2.fc42.no 100% | 13.5 MiB/s | 2.5 MiB | 00m00s [ 9/281] cmake-filesystem-0:3.31.6-2.f 100% | 879.5 KiB/s | 17.6 KiB | 00m00s [ 10/281] jsoncpp-0:1.9.6-1.fc42.x86_64 100% | 4.2 MiB/s | 103.5 KiB | 00m00s [ 11/281] make-1:4.4.1-10.fc42.x86_64 100% | 19.1 MiB/s | 587.0 KiB | 00m00s [ 12/281] rhash-0:1.4.5-2.fc42.x86_64 100% | 2.3 MiB/s | 198.7 KiB | 00m00s [ 13/281] systemd-0:257.9-2.fc42.x86_64 100% | 4.9 MiB/s | 4.0 MiB | 00m01s [ 14/281] gcc-c++-0:15.2.1-1.fc42.x86_6 100% | 11.0 MiB/s | 15.3 MiB | 00m01s [ 15/281] cuda-toolkit-12-9-0:12.9.1-1. 100% | 93.1 KiB/s | 8.8 KiB | 00m00s [ 16/281] libmpc-0:1.3.1-7.fc42.x86_64 100% | 3.5 MiB/s | 70.9 KiB | 00m00s [ 17/281] cuda-runtime-12-9-0:12.9.1-1. 100% | 39.1 KiB/s | 7.3 KiB | 00m00s [ 18/281] perl-Getopt-Long-1:2.58-3.fc4 100% | 936.9 KiB/s | 63.7 KiB | 00m00s [ 19/281] perl-PathTools-0:3.91-513.fc4 100% | 1.4 MiB/s | 87.3 KiB | 00m00s [ 20/281] perl-TermReadKey-0:2.38-24.fc 100% | 1.0 MiB/s | 35.4 KiB | 00m00s [ 21/281] git-core-0:2.51.0-2.fc42.x86_ 100% | 18.3 MiB/s | 5.0 MiB | 00m00s [ 22/281] cuda-demo-suite-12-9-0:12.9.7 100% | 6.1 MiB/s | 5.3 MiB | 00m01s [ 23/281] git-core-doc-0:2.51.0-2.fc42. 
100% | 13.4 MiB/s | 3.0 MiB | 00m00s [ 24/281] perl-Git-0:2.51.0-2.fc42.noar 100% | 642.1 KiB/s | 37.9 KiB | 00m00s [ 25/281] gcc-0:15.2.1-1.fc42.x86_64 100% | 38.6 MiB/s | 39.4 MiB | 00m01s [ 26/281] dbus-1:1.16.0-3.fc42.x86_64 100% | 387.9 KiB/s | 7.8 KiB | 00m00s [ 27/281] libseccomp-0:2.5.5-2.fc41.x86 100% | 1.5 MiB/s | 70.2 KiB | 00m00s [ 28/281] systemd-pam-0:257.9-2.fc42.x8 100% | 5.0 MiB/s | 411.6 KiB | 00m00s [ 29/281] golang-bin-0:1.24.8-1.fc42.x8 100% | 53.7 MiB/s | 29.4 MiB | 00m01s [ 30/281] systemd-shared-0:257.9-2.fc42 100% | 17.5 MiB/s | 1.8 MiB | 00m00s [ 31/281] emacs-filesystem-1:30.0-4.fc4 100% | 367.8 KiB/s | 7.4 KiB | 00m00s [ 32/281] cuda-libraries-12-9-0:12.9.1- 100% | 69.6 KiB/s | 7.7 KiB | 00m00s [ 33/281] cuda-compiler-12-9-0:12.9.1-1 100% | 59.5 KiB/s | 7.4 KiB | 00m00s [ 34/281] golang-src-0:1.24.8-1.fc42.no 100% | 16.7 MiB/s | 13.1 MiB | 00m01s [ 35/281] cuda-documentation-12-9-0:12. 100% | 857.3 KiB/s | 131.2 KiB | 00m00s [ 36/281] cuda-libraries-devel-12-9-0:1 100% | 42.7 KiB/s | 7.9 KiB | 00m00s [ 37/281] cuda-tools-12-9-0:12.9.1-1.x8 100% | 72.9 KiB/s | 7.3 KiB | 00m00s [ 38/281] cuda-nvml-devel-12-9-0:12.9.7 100% | 1.6 MiB/s | 201.2 KiB | 00m00s [ 39/281] perl-Exporter-0:5.78-512.fc42 100% | 885.4 KiB/s | 31.0 KiB | 00m00s [ 40/281] perl-Text-ParseWords-0:3.31-5 100% | 279.3 KiB/s | 16.5 KiB | 00m00s [ 41/281] perl-Pod-Usage-4:2.05-1.fc42. 100% | 375.1 KiB/s | 40.5 KiB | 00m00s [ 42/281] perl-constant-0:1.33-513.fc42 100% | 696.5 KiB/s | 23.0 KiB | 00m00s [ 43/281] perl-Error-1:0.17030-1.fc42.n 100% | 602.6 KiB/s | 40.4 KiB | 00m00s [ 44/281] perl-Carp-0:1.54-512.fc42.noa 100% | 331.7 KiB/s | 28.9 KiB | 00m00s [ 45/281] cpp-0:15.2.1-1.fc42.x86_64 100% | 37.0 MiB/s | 12.9 MiB | 00m00s [ 46/281] cuda-cudart-12-9-0:12.9.79-1. 100% | 1.4 MiB/s | 236.8 KiB | 00m00s [ 47/281] cuda-opencl-12-9-0:12.9.19-1. 
100% | 339.0 KiB/s | 34.2 KiB | 00m00s [ 48/281] cuda-nvrtc-12-9-0:12.9.86-1.x 100% | 25.7 MiB/s | 84.8 MiB | 00m03s [ 49/281] libcufile-12-9-0:1.14.1.1-1.x 100% | 7.0 MiB/s | 1.2 MiB | 00m00s [ 50/281] libcurand-12-9-0:10.3.10.19-1 100% | 32.7 MiB/s | 63.9 MiB | 00m02s [ 51/281] libcufft-12-9-0:11.4.1.4-1.x8 100% | 23.9 MiB/s | 191.7 MiB | 00m08s [ 52/281] libcusolver-12-9-0:11.7.5.82- 100% | 27.3 MiB/s | 324.9 MiB | 00m12s [ 53/281] libcublas-12-9-0:12.9.1.4-1.x 100% | 26.5 MiB/s | 555.4 MiB | 00m21s [ 54/281] libnvfatbin-12-9-0:12.9.82-1. 100% | 5.2 MiB/s | 940.1 KiB | 00m00s [ 55/281] libcusparse-12-9-0:12.5.10.65 100% | 24.5 MiB/s | 351.7 MiB | 00m14s [ 56/281] libnvjitlink-12-9-0:12.9.86-1 100% | 27.1 MiB/s | 37.6 MiB | 00m01s [ 57/281] cuda-cuobjdump-12-9-0:12.9.82 100% | 2.2 MiB/s | 277.9 KiB | 00m00s [ 58/281] libnvjpeg-12-9-0:12.4.0.76-1. 100% | 14.5 MiB/s | 5.1 MiB | 00m00s [ 59/281] cuda-cuxxfilt-12-9-0:12.9.82- 100% | 1.8 MiB/s | 282.8 KiB | 00m00s [ 60/281] cuda-nvprune-12-9-0:12.9.82-1 100% | 374.4 KiB/s | 76.0 KiB | 00m00s [ 61/281] cuda-cccl-12-9-0:12.9.27-1.x8 100% | 6.2 MiB/s | 1.7 MiB | 00m00s [ 62/281] cuda-cudart-devel-12-9-0:12.9 100% | 6.6 MiB/s | 3.0 MiB | 00m00s [ 63/281] cuda-driver-devel-12-9-0:12.9 100% | 439.5 KiB/s | 43.1 KiB | 00m00s [ 64/281] libnpp-12-9-0:12.4.1.87-1.x86 100% | 35.3 MiB/s | 271.1 MiB | 00m08s [ 65/281] cuda-opencl-devel-12-9-0:12.9 100% | 311.0 KiB/s | 119.4 KiB | 00m00s [ 66/281] cuda-profiler-api-12-9-0:12.9 100% | 218.6 KiB/s | 26.2 KiB | 00m00s [ 67/281] cuda-sandbox-devel-12-9-0:12. 100% | 475.7 KiB/s | 44.2 KiB | 00m00s [ 68/281] cuda-nvrtc-devel-12-9-0:12.9. 
100% | 37.4 MiB/s | 74.2 MiB | 00m02s [ 69/281] cuda-nvcc-12-9-0:12.9.86-1.x8 100% | 26.6 MiB/s | 111.3 MiB | 00m04s [ 70/281] libcufile-devel-12-9-0:1.14.1 100% | 14.3 MiB/s | 5.2 MiB | 00m00s [ 71/281] libcurand-devel-12-9-0:10.3.1 100% | 40.6 MiB/s | 64.2 MiB | 00m02s [ 72/281] libcusolver-devel-12-9-0:11.7 100% | 32.7 MiB/s | 213.1 MiB | 00m07s [ 73/281] libcufft-devel-12-9-0:11.4.1. 100% | 30.4 MiB/s | 385.6 MiB | 00m13s [ 74/281] libcublas-devel-12-9-0:12.9.1 100% | 34.3 MiB/s | 630.3 MiB | 00m18s [ 75/281] libnvfatbin-devel-12-9-0:12.9 100% | 8.8 MiB/s | 863.8 KiB | 00m00s [ 76/281] libnvjitlink-devel-12-9-0:12. 100% | 32.2 MiB/s | 36.1 MiB | 00m01s [ 77/281] libnvjpeg-devel-12-9-0:12.4.0 100% | 12.9 MiB/s | 4.9 MiB | 00m00s [ 78/281] cuda-command-line-tools-12-9- 100% | 68.0 KiB/s | 7.5 KiB | 00m00s [ 79/281] cuda-visual-tools-12-9-0:12.9 100% | 71.9 KiB/s | 7.5 KiB | 00m00s [ 80/281] libnpp-devel-12-9-0:12.4.1.87 100% | 30.9 MiB/s | 268.0 MiB | 00m09s [ 81/281] perl-Pod-Perldoc-0:3.28.01-51 100% | 709.2 KiB/s | 85.8 KiB | 00m00s [ 82/281] gds-tools-12-9-0:1.14.1.1-1.x 100% | 24.3 MiB/s | 42.0 MiB | 00m02s [ 83/281] perl-podlators-1:6.0.2-3.fc42 100% | 1.3 MiB/s | 128.6 KiB | 00m00s [ 84/281] cuda-crt-12-9-0:12.9.86-1.x86 100% | 1.3 MiB/s | 119.7 KiB | 00m00s [ 85/281] cuda-cupti-12-9-0:12.9.79-1.x 100% | 27.8 MiB/s | 30.3 MiB | 00m01s [ 86/281] cuda-nvvm-12-9-0:12.9.86-1.x8 100% | 44.2 MiB/s | 57.6 MiB | 00m01s [ 87/281] cuda-nvdisasm-12-9-0:12.9.88- 100% | 14.1 MiB/s | 5.4 MiB | 00m00s [ 88/281] cuda-gdb-12-9-0:12.9.79-1.x86 100% | 37.4 MiB/s | 33.4 MiB | 00m01s [ 89/281] cuda-nvprof-12-9-0:12.9.79-1. 
100% | 13.0 MiB/s | 4.9 MiB | 00m00s [ 90/281] cuda-nvtx-12-9-0:12.9.79-1.x8 100% | 1.0 MiB/s | 91.8 KiB | 00m00s [ 91/281] cuda-sanitizer-12-9-0:12.9.79 100% | 21.3 MiB/s | 14.3 MiB | 00m01s [ 92/281] cuda-nsight-compute-12-9-0:12 100% | 44.6 KiB/s | 9.9 KiB | 00m00s [ 93/281] cuda-nsight-systems-12-9-0:12 100% | 58.4 KiB/s | 9.2 KiB | 00m00s [ 94/281] cuda-nsight-12-9-0:12.9.79-1. 100% | 30.1 MiB/s | 113.2 MiB | 00m04s [ 95/281] numactl-libs-0:2.0.19-2.fc42. 100% | 434.4 KiB/s | 31.3 KiB | 00m00s [ 96/281] groff-base-0:1.23.0-8.fc42.x8 100% | 5.6 MiB/s | 1.1 MiB | 00m00s [ 97/281] perl-File-Temp-1:0.231.100-51 100% | 3.0 MiB/s | 59.2 KiB | 00m00s [ 98/281] perl-HTTP-Tiny-0:0.090-2.fc42 100% | 3.1 MiB/s | 56.5 KiB | 00m00s [ 99/281] perl-Pod-Simple-1:3.45-512.fc 100% | 8.2 MiB/s | 219.0 KiB | 00m00s [100/281] perl-parent-1:0.244-2.fc42.no 100% | 801.3 KiB/s | 15.2 KiB | 00m00s [101/281] perl-Term-ANSIColor-0:5.01-51 100% | 2.5 MiB/s | 47.7 KiB | 00m00s [102/281] perl-Term-Cap-0:1.18-512.fc42 100% | 1.1 MiB/s | 22.2 KiB | 00m00s [103/281] cuda-nvvp-12-9-0:12.9.79-1.x8 100% | 26.1 MiB/s | 115.0 MiB | 00m04s [104/281] perl-File-Path-0:2.18-512.fc4 100% | 1.9 MiB/s | 35.2 KiB | 00m00s [105/281] perl-IO-Socket-SSL-0:2.089-2. 100% | 7.5 MiB/s | 230.2 KiB | 00m00s [106/281] perl-MIME-Base64-0:3.16-512.f 100% | 1.2 MiB/s | 29.9 KiB | 00m00s [107/281] perl-Net-SSLeay-0:1.94-8.fc42 100% | 5.3 MiB/s | 376.0 KiB | 00m00s [108/281] perl-Socket-4:2.038-512.fc42. 
100% | 1.9 MiB/s | 54.8 KiB | 00m00s [109/281] perl-Time-Local-2:1.350-512.f 100% | 1.5 MiB/s | 34.5 KiB | 00m00s [110/281] perl-Pod-Escapes-1:1.07-512.f 100% | 825.7 KiB/s | 19.8 KiB | 00m00s [111/281] perl-Text-Tabs+Wrap-0:2024.00 100% | 1.1 MiB/s | 21.8 KiB | 00m00s [112/281] ncurses-0:6.5-5.20250125.fc42 100% | 4.0 MiB/s | 424.5 KiB | 00m00s [113/281] perl-IO-Socket-IP-0:0.43-2.fc 100% | 1.6 MiB/s | 42.4 KiB | 00m00s [114/281] perl-URI-0:5.31-2.fc42.noarch 100% | 3.2 MiB/s | 140.7 KiB | 00m00s [115/281] perl-Data-Dumper-0:2.189-513. 100% | 176.5 KiB/s | 56.7 KiB | 00m00s [116/281] perl-MIME-Base32-0:1.303-23.f 100% | 892.0 KiB/s | 20.5 KiB | 00m00s [117/281] perl-libnet-0:3.15-513.fc42.n 100% | 2.0 MiB/s | 128.4 KiB | 00m00s [118/281] perl-Digest-MD5-0:2.59-6.fc42 100% | 1.3 MiB/s | 36.0 KiB | 00m00s [119/281] perl-Digest-0:1.20-512.fc42.n 100% | 959.0 KiB/s | 24.9 KiB | 00m00s [120/281] dbus-broker-0:36-6.fc42.x86_6 100% | 236.8 KiB/s | 172.4 KiB | 00m01s [121/281] dbus-common-1:1.16.0-3.fc42.n 100% | 657.0 KiB/s | 14.5 KiB | 00m00s [122/281] libcusparse-devel-12-9-0:12.5 100% | 32.1 MiB/s | 710.9 MiB | 00m22s [123/281] perl-interpreter-4:5.40.3-519 100% | 151.6 KiB/s | 72.0 KiB | 00m00s [124/281] perl-Errno-0:1.38-519.fc42.x8 100% | 220.5 KiB/s | 14.8 KiB | 00m00s [125/281] go-filesystem-0:3.8.0-1.fc42. 100% | 124.7 KiB/s | 8.9 KiB | 00m00s [126/281] perl-libs-4:5.40.3-519.fc42.x 100% | 1.4 MiB/s | 2.3 MiB | 00m02s [127/281] expat-0:2.7.2-1.fc42.x86_64 100% | 462.8 KiB/s | 119.0 KiB | 00m00s [128/281] less-0:679-1.fc42.x86_64 100% | 867.8 KiB/s | 195.3 KiB | 00m00s [129/281] libedit-0:3.1-55.20250104cvs. 
100% | 2.6 MiB/s | 105.3 KiB | 00m00s [130/281] libfido2-0:1.15.0-3.fc42.x86_ 100% | 1.7 MiB/s | 98.4 KiB | 00m00s [131/281] openssh-0:9.9p1-11.fc42.x86_6 100% | 2.1 MiB/s | 353.6 KiB | 00m00s [132/281] libcbor-0:0.11.0-3.fc42.x86_6 100% | 1.3 MiB/s | 33.3 KiB | 00m00s [133/281] openssh-clients-0:9.9p1-11.fc 100% | 2.4 MiB/s | 767.0 KiB | 00m00s [134/281] perl-File-Basename-0:2.86-519 100% | 283.1 KiB/s | 17.0 KiB | 00m00s [135/281] perl-IPC-Open3-0:1.22-519.fc4 100% | 349.8 KiB/s | 21.7 KiB | 00m00s [136/281] perl-lib-0:0.65-519.fc42.x86_ 100% | 246.3 KiB/s | 14.8 KiB | 00m00s [137/281] glibc-devel-0:2.41-11.fc42.x8 100% | 2.8 MiB/s | 623.2 KiB | 00m00s [138/281] libstdc++-devel-0:15.2.1-1.fc 100% | 8.1 MiB/s | 2.9 MiB | 00m00s [139/281] perl-Storable-1:3.32-512.fc42 100% | 897.3 KiB/s | 99.6 KiB | 00m00s [140/281] perl-Encode-4:3.21-512.fc42.x 100% | 4.6 MiB/s | 1.1 MiB | 00m00s [141/281] perl-Fcntl-0:1.18-519.fc42.x8 100% | 529.5 KiB/s | 29.7 KiB | 00m00s [142/281] perl-POSIX-0:2.20-519.fc42.x8 100% | 1.2 MiB/s | 97.4 KiB | 00m00s [143/281] perl-FileHandle-0:2.05-519.fc 100% | 255.5 KiB/s | 15.3 KiB | 00m00s [144/281] perl-IO-0:1.55-519.fc42.x86_6 100% | 1.0 MiB/s | 81.6 KiB | 00m00s [145/281] perl-Symbol-0:1.09-519.fc42.n 100% | 234.0 KiB/s | 14.0 KiB | 00m00s [146/281] perl-Scalar-List-Utils-5:1.70 100% | 968.8 KiB/s | 74.6 KiB | 00m00s [147/281] perl-base-0:2.27-519.fc42.noa 100% | 291.7 KiB/s | 16.0 KiB | 00m00s [148/281] perl-DynaLoader-0:1.56-519.fc 100% | 345.1 KiB/s | 25.9 KiB | 00m00s [149/281] perl-overload-0:1.37-519.fc42 100% | 290.8 KiB/s | 45.4 KiB | 00m00s [150/281] perl-vars-0:1.05-519.fc42.noa 100% | 207.0 KiB/s | 12.8 KiB | 00m00s [151/281] perl-AutoLoader-0:5.74-519.fc 100% | 334.3 KiB/s | 21.1 KiB | 00m00s [152/281] perl-if-0:0.61.000-519.fc42.n 100% | 212.8 KiB/s | 13.8 KiB | 00m00s [153/281] perl-Getopt-Std-0:1.14-519.fc 100% | 267.5 KiB/s | 15.5 KiB | 00m00s [154/281] perl-B-0:1.89-519.fc42.x86_64 100% | 2.9 MiB/s | 176.5 KiB | 00m00s 
[155/281] vim-filesystem-2:9.1.1818-1.f 100% | 266.5 KiB/s | 15.5 KiB | 00m00s [156/281] libuv-1:1.51.0-1.fc42.x86_64 100% | 4.0 MiB/s | 266.3 KiB | 00m00s [157/281] cuda-toolkit-12-9-config-comm 100% | 129.5 KiB/s | 7.8 KiB | 00m00s [158/281] cuda-toolkit-config-common-0: 100% | 84.6 KiB/s | 8.0 KiB | 00m00s [159/281] cuda-toolkit-12-config-common 100% | 110.7 KiB/s | 8.0 KiB | 00m00s [160/281] nvidia-driver-cuda-3:580.95.0 100% | 5.5 MiB/s | 514.2 KiB | 00m00s [161/281] openssl-1:3.2.6-2.fc42.x86_64 100% | 3.2 MiB/s | 1.1 MiB | 00m00s [162/281] nvidia-driver-cuda-libs-3:580 100% | 32.3 MiB/s | 58.6 MiB | 00m02s [163/281] nvidia-persistenced-3:580.95. 100% | 264.8 KiB/s | 32.6 KiB | 00m00s [164/281] opencl-filesystem-0:1.0-22.fc 100% | 160.7 KiB/s | 7.6 KiB | 00m00s [165/281] libnvidia-cfg-3:580.95.05-1.f 100% | 1.3 MiB/s | 152.4 KiB | 00m00s [166/281] libnvidia-gpucomp-3:580.95.05 100% | 21.2 MiB/s | 19.1 MiB | 00m01s [167/281] nvidia-kmod-common-3:580.95.0 100% | 25.0 MiB/s | 72.3 MiB | 00m03s [168/281] libnvidia-ml-3:580.95.05-1.fc 100% | 5.6 MiB/s | 665.6 KiB | 00m00s [169/281] nvidia-modprobe-3:580.95.05-1 100% | 360.5 KiB/s | 28.5 KiB | 00m00s [170/281] cairo-0:1.18.2-3.fc42.x86_64 100% | 4.0 MiB/s | 731.8 KiB | 00m00s [171/281] fontconfig-0:2.16.0-2.fc42.x8 100% | 1.8 MiB/s | 272.0 KiB | 00m00s [172/281] freetype-0:2.13.3-2.fc42.x86_ 100% | 12.3 MiB/s | 415.5 KiB | 00m00s [173/281] libXext-0:1.3.6-3.fc42.x86_64 100% | 1.8 MiB/s | 39.3 KiB | 00m00s [174/281] libXrender-0:0.9.12-2.fc42.x8 100% | 1.5 MiB/s | 26.9 KiB | 00m00s [175/281] libpng-2:1.6.44-2.fc42.x86_64 100% | 4.3 MiB/s | 123.9 KiB | 00m00s [176/281] libxcb-0:1.17.0-5.fc42.x86_64 100% | 8.6 MiB/s | 239.0 KiB | 00m00s [177/281] default-fonts-core-sans-0:4.2 100% | 1.5 MiB/s | 31.3 KiB | 00m00s [178/281] xml-common-0:0.6.3-66.fc42.no 100% | 1.6 MiB/s | 31.2 KiB | 00m00s [179/281] libXau-0:1.0.12-2.fc42.x86_64 100% | 1.8 MiB/s | 33.6 KiB | 00m00s [180/281] abattis-cantarell-vf-fonts-0: 100% | 5.1 
MiB/s | 120.3 KiB | 00m00s [181/281] google-noto-sans-vf-fonts-0:2 100% | 8.6 MiB/s | 614.5 KiB | 00m00s [182/281] harfbuzz-0:10.4.0-1.fc42.x86_ 100% | 8.2 MiB/s | 1.1 MiB | 00m00s [183/281] graphite2-0:1.3.14-18.fc42.x8 100% | 3.5 MiB/s | 95.8 KiB | 00m00s [184/281] google-noto-fonts-common-0:20 100% | 588.7 KiB/s | 17.1 KiB | 00m00s [185/281] libXcomposite-0:0.4.6-5.fc42. 100% | 838.1 KiB/s | 24.3 KiB | 00m00s [186/281] libXi-0:1.8.2-2.fc42.x86_64 100% | 1.4 MiB/s | 40.5 KiB | 00m00s [187/281] libXtst-0:1.2.5-2.fc42.x86_64 100% | 938.7 KiB/s | 20.7 KiB | 00m00s [188/281] xorg-x11-fonts-Type1-0:7.5-40 100% | 11.8 MiB/s | 506.3 KiB | 00m00s [189/281] java-21-openjdk-1:21.0.8.0.9- 100% | 657.0 KiB/s | 410.6 KiB | 00m01s [190/281] mkfontscale-0:1.2.3-2.fc42.x8 100% | 1.0 MiB/s | 31.6 KiB | 00m00s [191/281] ttmkfdir-0:3.0.9-72.fc42.x86_ 100% | 164.5 KiB/s | 56.2 KiB | 00m00s [192/281] javapackages-filesystem-0:6.4 100% | 563.6 KiB/s | 14.1 KiB | 00m00s [193/281] tzdata-java-0:2025b-1.fc42.no 100% | 147.4 KiB/s | 46.4 KiB | 00m00s [194/281] libfontenc-0:1.1.8-3.fc42.x86 100% | 410.2 KiB/s | 32.4 KiB | 00m00s [195/281] libtirpc-0:1.3.7-0.fc42.x86_6 100% | 730.0 KiB/s | 94.2 KiB | 00m00s [196/281] nsight-compute-2025.2.1-0:202 100% | 33.9 MiB/s | 416.5 MiB | 00m12s [197/281] kmod-nvidia-open-dkms-3:580.9 100% | 18.7 MiB/s | 15.4 MiB | 00m01s [198/281] kmod-0:33-3.fc42.x86_64 100% | 458.1 KiB/s | 123.7 KiB | 00m00s [199/281] dkms-0:3.2.2-1.fc42.noarch 100% | 207.8 KiB/s | 88.3 KiB | 00m00s [200/281] python3-0:3.13.7-1.fc42.x86_6 100% | 105.3 KiB/s | 30.6 KiB | 00m00s [201/281] libb2-0:0.98.1-13.fc42.x86_64 100% | 478.9 KiB/s | 25.4 KiB | 00m00s [202/281] tzdata-0:2025b-1.fc42.noarch 100% | 2.0 MiB/s | 714.0 KiB | 00m00s [203/281] mpdecimal-0:4.0.1-1.fc42.x86_ 100% | 554.6 KiB/s | 97.1 KiB | 00m00s [204/281] python-pip-wheel-0:24.3.1-5.f 100% | 1.0 MiB/s | 1.2 MiB | 00m01s [205/281] perl-mro-0:1.29-519.fc42.x86_ 100% | 358.1 KiB/s | 29.7 KiB | 00m00s [206/281] 
perl-overloading-0:0.02-519.f 100% | 179.4 KiB/s | 12.7 KiB | 00m00s [207/281] perl-locale-0:1.12-519.fc42.n 100% | 87.4 KiB/s | 13.5 KiB | 00m00s [208/281] perl-File-stat-0:1.14-519.fc4 100% | 42.2 KiB/s | 16.9 KiB | 00m00s [209/281] perl-SelectSaver-0:1.02-519.f 100% | 82.5 KiB/s | 11.6 KiB | 00m00s [210/281] perl-Class-Struct-0:0.68-519. 100% | 223.5 KiB/s | 21.9 KiB | 00m00s [211/281] alsa-lib-0:1.2.14-3.fc42.x86_ 100% | 693.5 KiB/s | 531.3 KiB | 00m01s [212/281] cups-libs-1:2.4.14-2.fc42.x86 100% | 879.9 KiB/s | 261.3 KiB | 00m00s [213/281] avahi-libs-0:0.9~rc2-2.fc42.x 100% | 832.9 KiB/s | 69.1 KiB | 00m00s [214/281] cups-filesystem-1:2.4.14-2.fc 100% | 189.5 KiB/s | 12.7 KiB | 00m00s [215/281] dbus-libs-1:1.16.0-3.fc42.x86 100% | 2.5 MiB/s | 149.4 KiB | 00m00s [216/281] lksctp-tools-0:1.0.21-1.fc42. 100% | 658.9 KiB/s | 96.9 KiB | 00m00s [217/281] nss-0:3.116.0-1.fc42.x86_64 100% | 1.5 MiB/s | 712.9 KiB | 00m00s [218/281] nss-softokn-0:3.116.0-1.fc42. 100% | 1.5 MiB/s | 425.7 KiB | 00m00s [219/281] python3-libs-0:3.13.7-1.fc42. 100% | 1.8 MiB/s | 9.2 MiB | 00m05s [220/281] nss-sysinit-0:3.116.0-1.fc42. 
100% | 304.7 KiB/s | 18.9 KiB | 00m00s [221/281] nss-softokn-freebl-0:3.116.0- 100% | 1.5 MiB/s | 329.0 KiB | 00m00s [222/281] libX11-common-0:1.8.12-1.fc42 100% | 1.3 MiB/s | 175.9 KiB | 00m00s [223/281] libX11-0:1.8.12-1.fc42.x86_64 100% | 2.9 MiB/s | 655.4 KiB | 00m00s [224/281] libxcrypt-devel-0:4.4.38-7.fc 100% | 425.7 KiB/s | 29.4 KiB | 00m00s [225/281] elfutils-libelf-devel-0:0.193 100% | 118.6 KiB/s | 47.6 KiB | 00m00s [226/281] kernel-headers-0:6.16.2-200.f 100% | 3.4 MiB/s | 1.7 MiB | 00m01s [227/281] nettle-0:3.10.1-1.fc42.x86_64 100% | 4.1 MiB/s | 424.4 KiB | 00m00s [228/281] gnutls-0:3.8.10-1.fc42.x86_64 100% | 2.2 MiB/s | 1.4 MiB | 00m01s [229/281] fonts-filesystem-1:2.0.5-22.f 100% | 138.5 KiB/s | 8.7 KiB | 00m00s [230/281] glib2-0:2.84.4-1.fc42.x86_64 100% | 5.1 MiB/s | 3.1 MiB | 00m01s [231/281] nspr-0:4.37.0-3.fc42.x86_64 100% | 1.7 MiB/s | 137.6 KiB | 00m00s [232/281] pixman-0:0.46.2-1.fc42.x86_64 100% | 1.2 MiB/s | 292.7 KiB | 00m00s [233/281] nss-util-0:3.116.0-1.fc42.x86 100% | 529.2 KiB/s | 85.7 KiB | 00m00s [234/281] libzstd-devel-0:1.5.7-1.fc42. 
100% | 577.2 KiB/s | 53.1 KiB | 00m00s [235/281] zlib-ng-compat-devel-0:2.2.5- 100% | 266.1 KiB/s | 38.3 KiB | 00m00s [236/281] libSM-0:1.2.5-2.fc42.x86_64 100% | 673.3 KiB/s | 44.4 KiB | 00m00s [237/281] libXdamage-0:1.1.6-5.fc42.x86 100% | 81.8 KiB/s | 23.4 KiB | 00m00s [238/281] libXrandr-0:1.5.4-5.fc42.x86_ 100% | 455.3 KiB/s | 27.8 KiB | 00m00s [239/281] libglvnd-egl-1:1.7.0-7.fc42.x 100% | 321.2 KiB/s | 36.3 KiB | 00m00s [240/281] libglvnd-opengl-1:1.7.0-7.fc4 100% | 99.9 KiB/s | 37.4 KiB | 00m00s [241/281] libxkbcommon-0:1.8.1-1.fc42.x 100% | 571.9 KiB/s | 154.4 KiB | 00m00s [242/281] libxkbcommon-x11-0:1.8.1-1.fc 100% | 397.4 KiB/s | 22.3 KiB | 00m00s [243/281] xcb-util-image-0:0.4.1-7.fc42 100% | 383.5 KiB/s | 18.8 KiB | 00m00s [244/281] xcb-util-keysyms-0:0.4.1-7.fc 100% | 220.3 KiB/s | 14.1 KiB | 00m00s [245/281] xcb-util-renderutil-0:0.3.10- 100% | 390.6 KiB/s | 17.2 KiB | 00m00s [246/281] xcb-util-wm-0:0.4.2-7.fc42.x8 100% | 631.5 KiB/s | 30.3 KiB | 00m00s [247/281] libICE-0:1.1.2-2.fc42.x86_64 100% | 838.1 KiB/s | 78.8 KiB | 00m00s [248/281] libXfixes-0:6.0.1-5.fc42.x86_ 100% | 239.1 KiB/s | 19.1 KiB | 00m00s [249/281] libglvnd-1:1.7.0-7.fc42.x86_6 100% | 937.7 KiB/s | 114.4 KiB | 00m00s [250/281] xkeyboard-config-0:2.44-1.fc4 100% | 1.9 MiB/s | 978.5 KiB | 00m00s [251/281] xcb-util-0:0.4.1-7.fc42.x86_6 100% | 668.2 KiB/s | 18.0 KiB | 00m00s [252/281] OpenCL-ICD-Loader-0:3.0.6-2.2 100% | 940.3 KiB/s | 28.2 KiB | 00m00s [253/281] java-21-openjdk-headless-1:21 100% | 3.7 MiB/s | 46.1 MiB | 00m12s [254/281] mesa-libEGL-0:25.1.9-1.fc42.x 100% | 1.6 MiB/s | 126.9 KiB | 00m00s [255/281] libX11-xcb-0:1.8.12-1.fc42.x8 100% | 6.9 KiB/s | 11.5 KiB | 00m02s [256/281] mesa-libgbm-0:25.1.9-1.fc42.x 100% | 148.3 KiB/s | 14.8 KiB | 00m00s [257/281] libxshmfence-0:1.3.2-6.fc42.x 100% | 577.4 KiB/s | 13.3 KiB | 00m00s [258/281] mesa-dri-drivers-0:25.1.9-1.f 100% | 9.7 MiB/s | 12.5 MiB | 00m01s [259/281] lm_sensors-libs-0:3.6.0-22.fc 100% | 919.2 KiB/s | 40.4 KiB | 
00m00s [260/281] libdrm-0:2.4.126-1.fc42.x86_6 100% | 489.5 KiB/s | 161.5 KiB | 00m00s [261/281] libpciaccess-0:0.16-15.fc42.x 100% | 751.0 KiB/s | 26.3 KiB | 00m00s [262/281] libwayland-client-0:1.24.0-1. 100% | 134.6 KiB/s | 33.6 KiB | 00m00s [263/281] nsight-systems-2025.3.2-0:202 100% | 75.5 MiB/s | 400.7 MiB | 00m05s [264/281] mesa-filesystem-0:25.1.9-1.fc 100% | 8.0 KiB/s | 8.8 KiB | 00m01s [265/281] libwayland-server-0:1.24.0-1. 100% | 84.5 KiB/s | 41.5 KiB | 00m00s [266/281] llvm-filesystem-0:20.1.8-4.fc 100% | 31.7 KiB/s | 14.6 KiB | 00m00s [267/281] hwdata-0:0.400-1.fc42.noarch 100% | 1.2 MiB/s | 1.7 MiB | 00m01s [268/281] llvm-libs-0:20.1.8-4.fc42.x86 100% | 17.3 MiB/s | 33.5 MiB | 00m02s [269/281] gcc-plugin-annobin-0:15.2.1-1 100% | 606.3 KiB/s | 55.8 KiB | 00m00s [270/281] systemd-rpm-macros-0:257.9-2. 100% | 541.0 KiB/s | 34.1 KiB | 00m00s [271/281] cmake-rpm-macros-0:3.31.6-2.f 100% | 296.4 KiB/s | 16.9 KiB | 00m00s [272/281] pam-0:1.7.0-6.fc42.x86_64 100% | 6.7 MiB/s | 556.6 KiB | 00m00s [273/281] authselect-libs-0:1.5.1-1.fc4 100% | 1.7 MiB/s | 217.9 KiB | 00m00s [274/281] libnsl2-0:2.0.1-3.fc42.x86_64 100% | 1.5 MiB/s | 29.5 KiB | 00m00s [275/281] libpwquality-0:1.4.5-12.fc42. 100% | 4.3 MiB/s | 118.5 KiB | 00m00s [276/281] authselect-0:1.5.1-1.fc42.x86 100% | 2.0 MiB/s | 146.5 KiB | 00m00s [277/281] cracklib-0:2.9.11-7.fc42.x86_ 100% | 3.7 MiB/s | 91.6 KiB | 00m00s [278/281] gdbm-1:1.23-9.fc42.x86_64 100% | 2.7 MiB/s | 150.8 KiB | 00m00s [279/281] spirv-tools-libs-0:2025.2-2.f 100% | 685.0 KiB/s | 1.5 MiB | 00m02s [280/281] annobin-plugin-gcc-0:12.94-1. 
100% | 10.2 MiB/s | 981.9 KiB | 00m00s [281/281] annobin-docs-0:12.94-1.fc42.n 100% | 786.3 KiB/s | 90.4 KiB | 00m00s -------------------------------------------------------------------------------- [281/281] Total 100% | 71.9 MiB/s | 5.9 GiB | 01m25s Running transaction [ 1/283] Verify package files 100% | 10.0 B/s | 281.0 B | 00m28s >>> Running %pretrans scriptlet: java-21-openjdk-headless-1:21.0.8.0.9-1.fc42.x8 >>> Finished %pretrans scriptlet: java-21-openjdk-headless-1:21.0.8.0.9-1.fc42.x >>> [RPM] /var/lib/mock/fedora-42-x86_64-1760542611.653638/root/var/cache/dnf/ht [ 2/283] Prepare transaction 100% | 833.0 B/s | 281.0 B | 00m00s [ 3/283] Installing cuda-toolkit-12-co 100% | 0.0 B/s | 316.0 B | 00m00s [ 4/283] Installing cuda-toolkit-12-9- 100% | 0.0 B/s | 124.0 B | 00m00s [ 5/283] Installing cuda-toolkit-confi 100% | 304.7 KiB/s | 312.0 B | 00m00s [ 6/283] Installing expat-0:2.7.2-1.fc 100% | 17.3 MiB/s | 300.7 KiB | 00m00s [ 7/283] Installing nspr-0:4.37.0-3.fc 100% | 154.9 MiB/s | 317.2 KiB | 00m00s [ 8/283] Installing nss-util-0:3.116.0 100% | 197.0 MiB/s | 201.8 KiB | 00m00s [ 9/283] Installing libX11-xcb-0:1.8.1 100% | 11.5 MiB/s | 11.8 KiB | 00m00s [ 10/283] Installing fonts-filesystem-1 100% | 0.0 B/s | 788.0 B | 00m00s [ 11/283] Installing libtirpc-0:1.3.7-0 100% | 98.0 MiB/s | 200.7 KiB | 00m00s [ 12/283] Installing libmpc-0:1.3.1-7.f 100% | 81.1 MiB/s | 166.1 KiB | 00m00s [ 13/283] Installing make-1:4.4.1-10.fc 100% | 75.0 MiB/s | 1.8 MiB | 00m00s [ 14/283] Installing cmake-filesystem-0 100% | 2.5 MiB/s | 7.6 KiB | 00m00s [ 15/283] Installing cuda-cudart-12-9-0 100% | 36.6 MiB/s | 787.3 KiB | 00m00s [ 16/283] Installing cuda-opencl-12-9-0 100% | 7.0 MiB/s | 93.4 KiB | 00m00s [ 17/283] Installing libcublas-12-9-0:1 100% | 202.0 MiB/s | 815.6 MiB | 00m04s [ 18/283] Installing libcufft-12-9-0:11 100% | 156.9 MiB/s | 277.2 MiB | 00m02s [ 19/283] Installing libcufile-12-9-0:1 100% | 95.2 MiB/s | 3.2 MiB | 00m00s [ 20/283] Installing libcurand-12-9-0:1 
100% | 271.8 MiB/s | 159.3 MiB | 00m01s [ 21/283] Installing libcusolver-12-9-0 100% | 150.1 MiB/s | 470.6 MiB | 00m03s [ 22/283] Installing libcusparse-12-9-0 100% | 143.4 MiB/s | 463.0 MiB | 00m03s [ 23/283] Installing libnpp-12-9-0:12.4 100% | 146.7 MiB/s | 393.0 MiB | 00m03s [ 24/283] Installing libnvfatbin-12-9-0 100% | 95.8 MiB/s | 2.4 MiB | 00m00s [ 25/283] Installing libnvjitlink-12-9- 100% | 200.3 MiB/s | 91.6 MiB | 00m00s [ 26/283] Installing libnvjpeg-12-9-0:1 100% | 130.2 MiB/s | 9.0 MiB | 00m00s [ 27/283] Installing libwayland-server- 100% | 38.9 MiB/s | 79.7 KiB | 00m00s [ 28/283] Installing libglvnd-1:1.7.0-7 100% | 173.0 MiB/s | 531.6 KiB | 00m00s [ 29/283] Installing dbus-libs-1:1.16.0 100% | 114.1 MiB/s | 350.6 KiB | 00m00s [ 30/283] Installing alsa-lib-0:1.2.14- 100% | 55.5 MiB/s | 1.4 MiB | 00m00s [ 31/283] Installing libpng-2:1.6.44-2. 100% | 118.6 MiB/s | 242.9 KiB | 00m00s [ 32/283] Installing libnvidia-ml-3:580 100% | 311.2 MiB/s | 2.2 MiB | 00m00s [ 33/283] Installing libnvidia-cfg-3:58 100% | 188.7 MiB/s | 386.5 KiB | 00m00s [ 34/283] Installing libedit-0:3.1-55.2 100% | 120.0 MiB/s | 245.8 KiB | 00m00s [ 35/283] Installing cuda-nvprof-12-9-0 100% | 189.1 MiB/s | 10.6 MiB | 00m00s [ 36/283] Installing cuda-nvdisasm-12-9 100% | 253.2 MiB/s | 6.1 MiB | 00m00s [ 37/283] Installing cuda-cccl-12-9-0:1 100% | 103.6 MiB/s | 13.1 MiB | 00m00s [ 38/283] Installing cuda-nvrtc-12-9-0: 100% | 209.5 MiB/s | 216.9 MiB | 00m01s [ 39/283] Installing cuda-libraries-12- 100% | 0.0 B/s | 124.0 B | 00m00s [ 40/283] Installing cuda-nvml-devel-12 100% | 226.3 MiB/s | 1.4 MiB | 00m00s [ 41/283] Installing libseccomp-0:2.5.5 100% | 85.5 MiB/s | 175.2 KiB | 00m00s [ 42/283] Installing systemd-shared-0:2 100% | 244.2 MiB/s | 4.6 MiB | 00m00s [ 43/283] Installing cuda-nvrtc-devel-1 100% | 235.5 MiB/s | 248.0 MiB | 00m01s [ 44/283] Installing cuda-cudart-devel- 100% | 188.5 MiB/s | 8.5 MiB | 00m00s [ 45/283] Installing avahi-libs-0:0.9~r 100% | 90.9 MiB/s | 186.2 KiB 
| 00m00s
[ 46/283] Installing libglvnd-opengl-1: 100% | 146.1 MiB/s | 149.6 KiB | 00m00s
[ 47/283] Installing libnvjpeg-devel-12 100% | 164.7 MiB/s |   9.4 MiB | 00m00s
[ 48/283] Installing libnvjitlink-devel 100% | 240.7 MiB/s | 127.6 MiB | 00m01s
[ 49/283] Installing libnvfatbin-devel- 100% | 192.3 MiB/s |   2.3 MiB | 00m00s
[ 50/283] Installing libnpp-devel-12-9- 100% | 151.1 MiB/s | 406.2 MiB | 00m03s
[ 51/283] Installing libcusparse-devel- 100% | 171.4 MiB/s | 960.3 MiB | 00m06s
[ 52/283] Installing libcusolver-devel- 100% | 155.6 MiB/s | 332.5 MiB | 00m02s
[ 53/283] Installing libcurand-devel-12 100% | 279.6 MiB/s | 161.3 MiB | 00m01s
[ 54/283] Installing libcufile-devel-12 100% | 303.4 MiB/s |  27.9 MiB | 00m00s
[ 55/283] Installing libcufft-devel-12- 100% | 172.1 MiB/s | 567.3 MiB | 00m03s
[ 56/283] Installing libcublas-devel-12 100% | 210.8 MiB/s |   1.2 GiB | 00m06s
[ 57/283] Installing cuda-opencl-devel- 100% | 103.9 MiB/s | 744.4 KiB | 00m00s
[ 58/283] Installing zlib-ng-compat-dev 100% |  53.0 MiB/s | 108.5 KiB | 00m00s
[ 59/283] Installing cpp-0:15.2.1-1.fc4 100% | 246.4 MiB/s |  37.9 MiB | 00m00s
[ 60/283] Installing libnsl2-0:2.0.1-3. 100% |  28.8 MiB/s |  59.0 KiB | 00m00s
[ 61/283] Installing abattis-cantarell- 100% |  63.3 MiB/s | 194.4 KiB | 00m00s
[ 62/283] Installing nss-softokn-freebl 100% | 166.1 MiB/s | 850.5 KiB | 00m00s
[ 63/283] Installing nss-softokn-0:3.11 100% | 242.6 MiB/s |   1.9 MiB | 00m00s
[ 64/283] Installing nss-sysinit-0:3.11 100% |   1.2 MiB/s |  19.2 KiB | 00m00s
[ 65/283] Installing nss-0:3.116.0-1.fc 100% |  78.5 MiB/s |   1.9 MiB | 00m00s
[ 66/283] Installing cuda-sandbox-devel 100% |  48.4 MiB/s | 148.6 KiB | 00m00s
[ 67/283] Installing annobin-docs-0:12. 100% |  48.8 MiB/s | 100.0 KiB | 00m00s
[ 68/283] Installing gdbm-1:1.23-9.fc42 100% |  22.7 MiB/s | 465.2 KiB | 00m00s
[ 69/283] Installing cracklib-0:2.9.11- 100% |  10.8 MiB/s | 253.7 KiB | 00m00s
[ 70/283] Installing libpwquality-0:1.4 100% |  17.9 MiB/s | 421.6 KiB | 00m00s
[ 71/283] Installing authselect-libs-0: 100% |  82.0 MiB/s | 840.0 KiB | 00m00s
[ 72/283] Installing hwdata-0:0.400-1.f 100% | 290.8 MiB/s |   9.6 MiB | 00m00s
[ 73/283] Installing libpciaccess-0:0.1 100% |  14.9 MiB/s |  45.9 KiB | 00m00s
[ 74/283] Installing libdrm-0:2.4.126-1 100% |  43.8 MiB/s | 403.7 KiB | 00m00s
[ 75/283] Installing spirv-tools-libs-0 100% | 304.3 MiB/s |   5.8 MiB | 00m00s
[ 76/283] Installing llvm-filesystem-0: 100% |   1.0 MiB/s |   1.1 KiB | 00m00s
[ 77/283] Installing llvm-libs-0:20.1.8 100% | 293.5 MiB/s | 137.1 MiB | 00m00s
[ 78/283] Installing libwayland-client- 100% |  30.9 MiB/s |  63.2 KiB | 00m00s
[ 79/283] Installing mesa-filesystem-0: 100% |   4.2 MiB/s |   4.3 KiB | 00m00s
[ 80/283] Installing lm_sensors-libs-0: 100% |  84.9 MiB/s |  86.9 KiB | 00m00s
[ 81/283] Installing libxshmfence-0:1.3 100% |  13.2 MiB/s |  13.6 KiB | 00m00s
[ 82/283] Installing OpenCL-ICD-Loader- 100% |  23.4 MiB/s |  71.8 KiB | 00m00s
[ 83/283] Installing xkeyboard-config-0 100% | 158.8 MiB/s |   6.7 MiB | 00m00s
[ 84/283] Installing libxkbcommon-0:1.8 100% | 120.1 MiB/s | 369.1 KiB | 00m00s
[ 85/283] Installing libICE-0:1.1.2-2.f 100% | 195.1 MiB/s | 199.8 KiB | 00m00s
[ 86/283] Installing libSM-0:1.2.5-2.fc 100% | 103.9 MiB/s | 106.4 KiB | 00m00s
[ 87/283] Installing libzstd-devel-0:1. 100% | 203.9 MiB/s | 208.8 KiB | 00m00s
[ 88/283] Installing elfutils-libelf-de 100% |  27.1 MiB/s |  55.5 KiB | 00m00s
[ 89/283] Installing pixman-0:0.46.2-1. 100% | 231.6 MiB/s | 711.4 KiB | 00m00s
[ 90/283] Installing nettle-0:3.10.1-1. 100% | 193.8 MiB/s | 793.6 KiB | 00m00s
[ 91/283] Installing gnutls-0:3.8.10-1. 100% | 240.0 MiB/s |   3.8 MiB | 00m00s
[ 92/283] Installing glib2-0:2.84.4-1.f 100% | 191.0 MiB/s |  14.7 MiB | 00m00s
[ 93/283] Installing kernel-headers-0:6 100% | 103.5 MiB/s |   6.8 MiB | 00m00s
[ 94/283] Installing libxcrypt-devel-0: 100% |   6.5 MiB/s |  33.1 KiB | 00m00s
[ 95/283] Installing glibc-devel-0:2.41 100% |  70.7 MiB/s |   2.3 MiB | 00m00s
[ 96/283] Installing gcc-0:15.2.1-1.fc4 100% | 276.1 MiB/s | 111.3 MiB | 00m00s
[ 97/283] Installing libX11-common-0:1. 100% |  59.4 MiB/s |   1.2 MiB | 00m00s
[ 98/283] Installing lksctp-tools-0:1.0 100% |  12.5 MiB/s | 255.4 KiB | 00m00s
[ 99/283] Installing cups-filesystem-1: 100% |   1.7 MiB/s |   1.8 KiB | 00m00s
[100/283] Installing cups-libs-1:2.4.14 100% | 151.4 MiB/s | 620.2 KiB | 00m00s
[101/283] Installing python-pip-wheel-0 100% | 414.8 MiB/s |   1.2 MiB | 00m00s
[102/283] Installing mpdecimal-0:4.0.1- 100% |  35.6 MiB/s | 218.8 KiB | 00m00s
[103/283] Installing tzdata-0:2025b-1.f 100% |  24.6 MiB/s |   1.9 MiB | 00m00s
[104/283] Installing libb2-0:0.98.1-13. 100% |   6.6 MiB/s |  47.2 KiB | 00m00s
[105/283] Installing python3-libs-0:3.1 100% | 188.1 MiB/s |  40.4 MiB | 00m00s
[106/283] Installing python3-0:3.13.7-1 100% |   1.7 MiB/s |  30.5 KiB | 00m00s
[107/283] Installing cmake-rpm-macros-0 100% |   8.1 MiB/s |   8.3 KiB | 00m00s
[108/283] Installing kmod-0:33-3.fc42.x 100% |  13.0 MiB/s | 239.9 KiB | 00m00s
[109/283] Installing libfontenc-0:1.1.8 100% |  70.6 MiB/s |  72.3 KiB | 00m00s
[110/283] Installing tzdata-java-0:2025 100% |  98.1 MiB/s | 100.5 KiB | 00m00s
[111/283] Installing javapackages-files 100% |   1.8 MiB/s |   5.5 KiB | 00m00s
[112/283] Installing java-21-openjdk-he 100% | 295.7 MiB/s | 197.8 MiB | 00m01s
[113/283] Installing google-noto-fonts- 100% |  18.1 MiB/s |  18.5 KiB | 00m00s
[114/283] Installing google-noto-sans-v 100% | 198.8 MiB/s |   1.4 MiB | 00m00s
[115/283] Installing default-fonts-core 100% |   5.9 MiB/s |  18.2 KiB | 00m00s
[116/283] Installing graphite2-0:1.3.14 100% |  11.4 MiB/s | 197.9 KiB | 00m00s
[117/283] Installing harfbuzz-0:10.4.0- 100% | 229.1 MiB/s |   2.7 MiB | 00m00s
[118/283] Installing freetype-0:2.13.3- 100% | 168.0 MiB/s | 859.9 KiB | 00m00s
[119/283] Installing mkfontscale-0:1.2. 100% |   2.2 MiB/s |  46.4 KiB | 00m00s
[120/283] Installing ttmkfdir-0:3.0.9-7 100% |   6.9 MiB/s | 119.6 KiB | 00m00s
[121/283] Installing libXau-0:1.0.12-2. 100% |  38.3 MiB/s |  78.5 KiB | 00m00s
[122/283] Installing libxcb-0:1.17.0-5. 100% | 120.0 MiB/s |   1.1 MiB | 00m00s
[123/283] Installing libX11-0:1.8.12-1. 100% | 213.6 MiB/s |   1.3 MiB | 00m00s
[124/283] Installing libXext-0:1.3.6-3. 100% |  44.5 MiB/s |  91.2 KiB | 00m00s
[125/283] Installing libXrender-0:0.9.1 100% |  25.0 MiB/s |  51.3 KiB | 00m00s
[126/283] Installing libXi-0:1.8.2-2.fc 100% |  83.7 MiB/s |  85.7 KiB | 00m00s
[127/283] Installing libXtst-0:1.2.5-2. 100% |  33.8 MiB/s |  34.6 KiB | 00m00s
[128/283] Installing libXcomposite-0:0. 100% |  22.5 MiB/s |  46.0 KiB | 00m00s
[129/283] Installing mesa-dri-drivers-0 100% | 282.8 MiB/s |  46.7 MiB | 00m00s
[130/283] Installing mesa-libgbm-0:25.1 100% |  20.0 MiB/s |  20.5 KiB | 00m00s
[131/283] Installing libglvnd-egl-1:1.7 100% |  34.3 MiB/s |  70.3 KiB | 00m00s
[132/283] Installing mesa-libEGL-0:25.1 100% | 164.0 MiB/s | 335.9 KiB | 00m00s
[133/283] Installing libXrandr-0:1.5.4- 100% |  55.7 MiB/s |  57.0 KiB | 00m00s
[134/283] Installing libXfixes-0:6.0.1- 100% |  30.8 MiB/s |  31.6 KiB | 00m00s
[135/283] Installing libXdamage-0:1.1.6 100% |  22.1 MiB/s |  45.2 KiB | 00m00s
[136/283] Installing libxkbcommon-x11-0 100% |  35.5 MiB/s |  36.4 KiB | 00m00s
[137/283] Installing xcb-util-keysyms-0 100% |  17.4 MiB/s |  17.8 KiB | 00m00s
[138/283] Installing xcb-util-renderuti 100% |  25.2 MiB/s |  25.8 KiB | 00m00s
[139/283] Installing xcb-util-wm-0:0.4. 100% |  40.6 MiB/s |  83.2 KiB | 00m00s
[140/283] Installing xcb-util-0:0.4.1-7 100% |  27.0 MiB/s |  27.7 KiB | 00m00s
[141/283] Installing xcb-util-image-0:0 100% |   1.8 MiB/s |  23.6 KiB | 00m00s
[142/283] Installing xml-common-0:0.6.3 100% |  26.4 MiB/s |  81.1 KiB | 00m00s
[143/283] Installing fontconfig-0:2.16. 100% | 740.2 KiB/s | 783.9 KiB | 00m01s
[144/283] Installing cairo-0:1.18.2-3.f 100% | 198.1 MiB/s |   1.8 MiB | 00m00s
[145/283] Installing xorg-x11-fonts-Typ 100% | 812.8 KiB/s | 865.6 KiB | 00m01s
[146/283] Installing java-21-openjdk-1: 100% |  49.9 MiB/s |   1.0 MiB | 00m00s
[147/283] Installing cuda-nsight-12-9-0 100% | 362.8 MiB/s | 113.2 MiB | 00m00s
[148/283] Installing cuda-nvvp-12-9-0:1 100% | 170.0 MiB/s | 128.0 MiB | 00m01s
[149/283] Installing nsight-systems-202 100% | 263.4 MiB/s |   1.0 GiB | 00m04s
[150/283] Installing cuda-nsight-system 100% | 149.8 KiB/s |   2.5 KiB | 00m00s
[151/283] Installing nvidia-modprobe-3: 100% |   2.7 MiB/s |  55.5 KiB | 00m00s
[152/283] Installing libnvidia-gpucomp- 100% | 289.6 MiB/s |  68.9 MiB | 00m00s
[153/283] Installing nvidia-driver-cuda 100% | 338.4 MiB/s | 343.5 MiB | 00m01s
[154/283] Installing opencl-filesystem- 100% |  61.8 KiB/s | 380.0   B | 00m00s
[155/283] Installing openssl-1:3.2.6-2. 100% |  51.0 MiB/s |   1.7 MiB | 00m00s
[156/283] Installing libuv-1:1.51.0-1.f 100% |  43.0 MiB/s | 573.0 KiB | 00m00s
[157/283] Installing vim-filesystem-2:9 100% |   1.2 MiB/s |   4.7 KiB | 00m00s
[158/283] Installing libstdc++-devel-0: 100% | 210.6 MiB/s |  16.2 MiB | 00m00s
[159/283] Installing gcc-c++-0:15.2.1-1 100% | 277.5 MiB/s |  41.4 MiB | 00m00s
[160/283] Installing libcbor-0:0.11.0-3 100% |  38.7 MiB/s |  79.2 KiB | 00m00s
[161/283] Installing libfido2-0:1.15.0- 100% | 119.0 MiB/s | 243.6 KiB | 00m00s
[162/283] Installing openssh-0:9.9p1-11 100% |  69.1 MiB/s |   1.4 MiB | 00m00s
[163/283] Installing openssh-clients-0: 100% |  71.2 MiB/s |   2.7 MiB | 00m00s
[164/283] Installing less-0:679-1.fc42. 100% |  21.0 MiB/s | 409.4 KiB | 00m00s
[165/283] Installing git-core-0:2.51.0- 100% | 257.1 MiB/s |  23.7 MiB | 00m00s
[166/283] Installing git-core-doc-0:2.5 100% | 192.4 MiB/s |  17.9 MiB | 00m00s
[167/283] Installing go-filesystem-0:3. 100% |  19.1 KiB/s | 392.0   B | 00m00s
>>> Running sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch
>>> Finished sysusers scriptlet: dbus-common-1:1.16.0-3.fc42.noarch
>>> Scriptlet output:
>>> Creating group 'dbus' with GID 81.
>>> Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
>>>
[168/283] Installing dbus-common-1:1.16 100% | 451.7 KiB/s |  13.6 KiB | 00m00s
[169/283] Installing dbus-broker-0:36-6 100% |  11.5 MiB/s | 389.6 KiB | 00m00s
[170/283] Installing dbus-1:1.16.0-3.fc 100% |  60.5 KiB/s | 124.0   B | 00m00s
[171/283] Installing systemd-pam-0:257. 100% |  28.2 MiB/s |   1.1 MiB | 00m00s
>>> Running sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64
>>> Finished sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64
>>> Scriptlet output:
>>> Creating group 'systemd-journal' with GID 190.
>>>
>>> Running sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64
>>> Finished sysusers scriptlet: systemd-0:257.9-2.fc42.x86_64
>>> Scriptlet output:
>>> Creating group 'systemd-oom' with GID 999.
>>> Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 999 and
>>>
[172/283] Installing systemd-0:257.9-2. 100% |  42.3 MiB/s |  12.3 MiB | 00m00s
>>> Running sysusers scriptlet: nvidia-persistenced-3:580.95.05-1.fc42.x86_64
>>> Finished sysusers scriptlet: nvidia-persistenced-3:580.95.05-1.fc42.x86_64
>>> Scriptlet output:
>>> Creating group 'nvidia-persistenced' with GID 998.
>>> Creating user 'nvidia-persistenced' (NVIDIA Persistence Daemon) with UID 998
>>>
[173/283] Installing nvidia-persistence 100% |   1.4 MiB/s |  59.1 KiB | 00m00s
[174/283] Installing dkms-0:3.2.2-1.fc4 100% |   5.2 MiB/s | 213.1 KiB | 00m00s
>>> Running %post scriptlet: dkms-0:3.2.2-1.fc42.noarch
>>> Finished %post scriptlet: dkms-0:3.2.2-1.fc42.noarch
>>> Scriptlet output:
>>> Created symlink '/etc/systemd/system/multi-user.target.wants/dkms.service' →
>>>
[175/283] Installing nvidia-kmod-common 100% | 362.3 MiB/s | 100.4 MiB | 00m00s
>>> Running %post scriptlet: nvidia-kmod-common-3:580.95.05-1.fc42.noarch
>>> Finished %post scriptlet: nvidia-kmod-common-3:580.95.05-1.fc42.noarch
>>> Scriptlet output:
>>> Nvidia driver setup: no bootloader configured. Please run 'nvidia-boot-updat
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>> grep: /etc/kernel/cmdline: No such file or directory
>>>
[176/283] Installing kmod-nvidia-open-d 100% | 121.9 MiB/s | 120.0 MiB | 00m01s
[177/283] Installing nvidia-driver-cuda 100% |  55.2 MiB/s |   1.4 MiB | 00m00s
[178/283] Installing cuda-runtime-12-9- 100% | 121.1 KiB/s | 124.0   B | 00m00s
[179/283] Installing ncurses-0:6.5-5.20 100% |  28.6 MiB/s | 614.7 KiB | 00m00s
[180/283] Installing nsight-compute-202 100% | 167.8 MiB/s |   1.1 GiB | 00m07s
[181/283] Installing cuda-nsight-comput 100% | 305.0 KiB/s |   7.9 KiB | 00m00s
[182/283] Installing groff-base-0:1.23. 100% |  68.3 MiB/s |   3.9 MiB | 00m00s
[183/283] Installing perl-Digest-0:1.20 100% |  18.1 MiB/s |  37.1 KiB | 00m00s
[184/283] Installing perl-Digest-MD5-0: 100% |  20.0 MiB/s |  61.6 KiB | 00m00s
[185/283] Installing perl-B-0:1.89-519. 100% | 122.4 MiB/s | 501.3 KiB | 00m00s
[186/283] Installing perl-FileHandle-0: 100% |   4.8 MiB/s |   9.8 KiB | 00m00s
[187/283] Installing perl-MIME-Base32-0 100% |  15.7 MiB/s |  32.2 KiB | 00m00s
[188/283] Installing perl-Data-Dumper-0 100% |  57.4 MiB/s | 117.5 KiB | 00m00s
[189/283] Installing perl-libnet-0:3.15 100% |  95.9 MiB/s | 294.7 KiB | 00m00s
[190/283] Installing perl-AutoLoader-0: 100% |  20.5 MiB/s |  20.9 KiB | 00m00s
[191/283] Installing perl-IO-Socket-IP- 100% |  49.9 MiB/s | 102.2 KiB | 00m00s
[192/283] Installing perl-URI-0:5.31-2. 100% |  43.9 MiB/s | 269.6 KiB | 00m00s
[193/283] Installing perl-Time-Local-2: 100% |  68.9 MiB/s |  70.6 KiB | 00m00s
[194/283] Installing perl-Text-Tabs+Wra 100% |  11.7 MiB/s |  23.9 KiB | 00m00s
[195/283] Installing perl-File-Path-0:2 100% |  63.0 MiB/s |  64.5 KiB | 00m00s
[196/283] Installing perl-Pod-Escapes-1 100% |  25.3 MiB/s |  25.9 KiB | 00m00s
[197/283] Installing perl-if-0:0.61.000 100% |   3.0 MiB/s |   6.2 KiB | 00m00s
[198/283] Installing perl-Net-SSLeay-0: 100% | 135.9 MiB/s |   1.4 MiB | 00m00s
[199/283] Installing perl-locale-0:1.12 100% |   6.7 MiB/s |   6.9 KiB | 00m00s
[200/283] Installing perl-IO-Socket-SSL 100% | 138.2 MiB/s | 707.4 KiB | 00m00s
[201/283] Installing perl-Term-ANSIColo 100% |  48.4 MiB/s |  99.2 KiB | 00m00s
[202/283] Installing perl-Term-Cap-0:1. 100% |  29.9 MiB/s |  30.6 KiB | 00m00s
[203/283] Installing perl-Pod-Simple-1: 100% | 111.4 MiB/s | 570.4 KiB | 00m00s
[204/283] Installing perl-POSIX-0:2.20- 100% | 113.4 MiB/s | 232.3 KiB | 00m00s
[205/283] Installing perl-File-Temp-1:0 100% |  80.1 MiB/s | 164.1 KiB | 00m00s
[206/283] Installing perl-IPC-Open3-0:1 100% |  22.7 MiB/s |  23.3 KiB | 00m00s
[207/283] Installing perl-HTTP-Tiny-0:0 100% |  76.4 MiB/s | 156.4 KiB | 00m00s
[208/283] Installing perl-Class-Struct- 100% |  25.3 MiB/s |  25.9 KiB | 00m00s
[209/283] Installing perl-Socket-4:2.03 100% |  59.6 MiB/s | 122.0 KiB | 00m00s
[210/283] Installing perl-Symbol-0:1.09 100% |   7.0 MiB/s |   7.2 KiB | 00m00s
[211/283] Installing perl-SelectSaver-0 100% |   2.5 MiB/s |   2.6 KiB | 00m00s
[212/283] Installing perl-podlators-1:6 100% |  17.4 MiB/s | 321.4 KiB | 00m00s
[213/283] Installing perl-Pod-Perldoc-0 100% |   9.7 MiB/s | 169.2 KiB | 00m00s
[214/283] Installing perl-File-stat-0:1 100% |  12.7 MiB/s |  13.1 KiB | 00m00s
[215/283] Installing perl-Text-ParseWor 100% |  14.2 MiB/s |  14.6 KiB | 00m00s
[216/283] Installing perl-Fcntl-0:1.18- 100% |  48.8 MiB/s |  50.0 KiB | 00m00s
[217/283] Installing perl-base-0:2.27-5 100% |  12.6 MiB/s |  12.9 KiB | 00m00s
[218/283] Installing perl-mro-0:1.29-51 100% |  41.6 MiB/s |  42.6 KiB | 00m00s
[219/283] Installing perl-overloading-0 100% |   5.4 MiB/s |   5.5 KiB | 00m00s
[220/283] Installing perl-Pod-Usage-4:2 100% |   5.7 MiB/s |  87.9 KiB | 00m00s
[221/283] Installing perl-IO-0:1.55-519 100% |  49.2 MiB/s | 151.3 KiB | 00m00s
[222/283] Installing perl-constant-0:1. 100% |  26.7 MiB/s |  27.4 KiB | 00m00s
[223/283] Installing perl-parent-1:0.24 100% |  10.7 MiB/s |  11.0 KiB | 00m00s
[224/283] Installing perl-MIME-Base64-0 100% |  21.6 MiB/s |  44.3 KiB | 00m00s
[225/283] Installing perl-Errno-0:1.38- 100% |   0.0   B/s |   8.7 KiB | 00m00s
[226/283] Installing perl-File-Basename 100% |  14.2 MiB/s |  14.6 KiB | 00m00s
[227/283] Installing perl-Scalar-List-U 100% |  48.4 MiB/s | 148.6 KiB | 00m00s
[228/283] Installing perl-vars-0:1.05-5 100% |   4.2 MiB/s |   4.3 KiB | 00m00s
[229/283] Installing perl-Getopt-Std-0: 100% |   0.0   B/s |  11.7 KiB | 00m00s
[230/283] Installing perl-overload-0:1. 100% |  70.3 MiB/s |  71.9 KiB | 00m00s
[231/283] Installing perl-Storable-1:3. 100% |  76.1 MiB/s | 233.9 KiB | 00m00s
[232/283] Installing perl-Getopt-Long-1 100% |  71.9 MiB/s | 147.2 KiB | 00m00s
[233/283] Installing perl-Exporter-0:5. 100% |  54.3 MiB/s |  55.6 KiB | 00m00s
[234/283] Installing perl-Carp-0:1.54-5 100% |  46.6 MiB/s |  47.7 KiB | 00m00s
[235/283] Installing perl-PathTools-0:3 100% |  60.1 MiB/s | 184.5 KiB | 00m00s
[236/283] Installing perl-DynaLoader-0: 100% |  31.7 MiB/s |  32.5 KiB | 00m00s
[237/283] Installing perl-Encode-4:3.21 100% | 134.1 MiB/s |   4.7 MiB | 00m00s
[238/283] Installing perl-libs-4:5.40.3 100% | 152.2 MiB/s |   9.9 MiB | 00m00s
[239/283] Installing perl-interpreter-4 100% |   7.3 MiB/s | 120.1 KiB | 00m00s
[240/283] Installing perl-TermReadKey-0 100% |  32.3 MiB/s |  66.2 KiB | 00m00s
[241/283] Installing perl-Error-1:0.170 100% |  39.0 MiB/s |  80.0 KiB | 00m00s
[242/283] Installing perl-lib-0:0.65-51 100% |   8.7 MiB/s |   8.9 KiB | 00m00s
[243/283] Installing perl-Git-0:2.51.0- 100% |  63.8 MiB/s |  65.4 KiB | 00m00s
[244/283] Installing git-0:2.51.0-2.fc4 100% |  28.2 MiB/s |  57.7 KiB | 00m00s
[245/283] Installing numactl-libs-0:2.0 100% |  52.5 MiB/s |  53.8 KiB | 00m00s
[246/283] Installing gds-tools-12-9-0:1 100% | 152.9 MiB/s |  59.9 MiB | 00m00s
[247/283] Installing cuda-nvtx-12-9-0:1 100% | 102.7 MiB/s | 420.5 KiB | 00m00s
[248/283] Installing cuda-gdb-12-9-0:12 100% | 157.2 MiB/s |  89.7 MiB | 00m01s
[249/283] Installing cuda-cupti-12-9-0: 100% | 202.6 MiB/s | 143.3 MiB | 00m01s
[250/283] Installing cuda-nvvm-12-9-0:1 100% | 145.4 MiB/s | 132.7 MiB | 00m01s
[251/283] Installing cuda-crt-12-9-0:12 100% | 182.4 MiB/s | 933.9 KiB | 00m00s
[252/283] Installing cuda-nvcc-12-9-0:1 100% | 203.0 MiB/s | 317.8 MiB | 00m02s
[253/283] Installing cuda-profiler-api- 100% |  73.1 MiB/s |  74.9 KiB | 00m00s
[254/283] Installing cuda-driver-devel- 100% | 129.7 MiB/s | 132.8 KiB | 00m00s
[255/283] Installing cuda-libraries-dev 100% |   0.0   B/s | 124.0   B | 00m00s
[256/283] Installing cuda-visual-tools- 100% |   0.0   B/s | 124.0   B | 00m00s
[257/283] Installing cuda-nvprune-12-9- 100% |  88.8 MiB/s | 181.8 KiB | 00m00s
[258/283] Installing cuda-cuxxfilt-12-9 100% | 174.1 MiB/s |   1.0 MiB | 00m00s
[259/283] Installing cuda-cuobjdump-12- 100% | 108.5 MiB/s | 666.6 KiB | 00m00s
[260/283] Installing cuda-compiler-12-9 100% |   0.0   B/s | 124.0   B | 00m00s
[261/283] Installing cuda-documentation 100% | 131.8 MiB/s | 539.9 KiB | 00m00s
[262/283] Installing emacs-filesystem-1 100% |  40.9 KiB/s | 544.0   B | 00m00s
[263/283] Installing golang-src-0:1.24. 100% | 152.7 MiB/s |  80.2 MiB | 00m01s
[264/283] Installing golang-bin-0:1.24. 100% | 319.5 MiB/s | 122.1 MiB | 00m00s
[265/283] Installing golang-0:1.24.8-1. 100% | 358.0 MiB/s |   9.0 MiB | 00m00s
[266/283] Installing cuda-demo-suite-12 100% | 158.0 MiB/s |  13.9 MiB | 00m00s
[267/283] Installing rhash-0:1.4.5-2.fc 100% |  17.4 MiB/s | 356.4 KiB | 00m00s
[268/283] Installing jsoncpp-0:1.9.6-1. 100% |  18.4 MiB/s | 263.1 KiB | 00m00s
[269/283] Installing cmake-data-0:3.31. 100% |  54.0 MiB/s |   9.1 MiB | 00m00s
[270/283] Installing cmake-0:3.31.6-2.f 100% | 265.3 MiB/s |  34.2 MiB | 00m00s
[271/283] Installing hiredis-0:1.2.0-6. 100% |  35.0 MiB/s | 107.6 KiB | 00m00s
[272/283] Installing fmt-0:11.1.4-1.fc4 100% | 129.6 MiB/s | 265.4 KiB | 00m00s
[273/283] Installing cuda-sanitizer-12- 100% | 157.7 MiB/s |  37.4 MiB | 00m00s
[274/283] Installing cuda-command-line- 100% | 121.1 KiB/s | 124.0   B | 00m00s
[275/283] Installing cuda-tools-12-9-0: 100% |   0.0   B/s | 124.0   B | 00m00s
[276/283] Installing cuda-toolkit-12-9- 100% |   0.0   B/s |   3.6 KiB | 00m00s
[277/283] Installing cuda-12-9-0:12.9.1 100% |   5.3 KiB/s | 124.0   B | 00m00s
[278/283] Installing ccache-0:4.10.2-2. 100% |  24.2 MiB/s |   1.5 MiB | 00m00s
[279/283] Installing gcc-plugin-annobin 100% |   2.2 MiB/s |  58.6 KiB | 00m00s
[280/283] Installing annobin-plugin-gcc 100% |  31.3 MiB/s | 995.1 KiB | 00m00s
[281/283] Installing authselect-0:1.5.1 100% |   7.4 MiB/s | 158.2 KiB | 00m00s
[282/283] Installing pam-0:1.7.0-6.fc42 100% |  42.5 MiB/s |   1.7 MiB | 00m00s
[283/283] Installing systemd-rpm-macros 100% |  18.3 KiB/s |  11.3 KiB | 00m01s
Warning: skipped OpenPGP checks for 73 packages from repositories: https_developer_download_nvidia_com_compute_cuda_repos_fedora41_x86_64, https_developer_download_nvidia_com_compute_cuda_repos_fedora42_x86_64
Complete!
Finish: build setup for ollama-0.12.5-1.fc42.src.rpm Start: rpmbuild ollama-0.12.5-1.fc42.src.rpm Building target platforms: x86_64 Building for target x86_64 setting SOURCE_DATE_EPOCH=1760486400 Executing(%mkbuilddir): /bin/sh -e /var/tmp/rpm-tmp.cxOSRL Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.aIu2EL + umask 022 + cd /builddir/build/BUILD/ollama-0.12.5-build + cd /builddir/build/BUILD/ollama-0.12.5-build + rm -rf ollama-0.12.5 + /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/v0.12.5.zip TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/v0.12.5.zip' Archive: /builddir/build/SOURCES/v0.12.5.zip 3d32249c749c6f77c1dc8a7cb55ae74fc2f4c08b creating: ollama-0.12.5/ inflating: ollama-0.12.5/.dockerignore inflating: ollama-0.12.5/.gitattributes creating: ollama-0.12.5/.github/ creating: ollama-0.12.5/.github/ISSUE_TEMPLATE/ inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/10_bug_report.yml inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/20_feature_request.md inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/30_model_request.md inflating: ollama-0.12.5/.github/ISSUE_TEMPLATE/config.yml creating: ollama-0.12.5/.github/workflows/ inflating: ollama-0.12.5/.github/workflows/latest.yaml inflating: ollama-0.12.5/.github/workflows/release.yaml inflating: ollama-0.12.5/.github/workflows/test.yaml inflating: ollama-0.12.5/.gitignore inflating: ollama-0.12.5/.golangci.yaml inflating: ollama-0.12.5/CMakeLists.txt inflating: ollama-0.12.5/CMakePresets.json inflating: ollama-0.12.5/CONTRIBUTING.md inflating: ollama-0.12.5/Dockerfile inflating: ollama-0.12.5/LICENSE inflating: ollama-0.12.5/Makefile.sync inflating: ollama-0.12.5/README.md inflating: ollama-0.12.5/SECURITY.md creating: ollama-0.12.5/api/ inflating: ollama-0.12.5/api/client.go inflating: ollama-0.12.5/api/client_test.go creating: ollama-0.12.5/api/examples/ inflating: ollama-0.12.5/api/examples/README.md creating: ollama-0.12.5/api/examples/chat/ inflating: ollama-0.12.5/api/examples/chat/main.go creating: 
ollama-0.12.5/api/examples/generate-streaming/ inflating: ollama-0.12.5/api/examples/generate-streaming/main.go creating: ollama-0.12.5/api/examples/generate/ inflating: ollama-0.12.5/api/examples/generate/main.go creating: ollama-0.12.5/api/examples/multimodal/ inflating: ollama-0.12.5/api/examples/multimodal/main.go creating: ollama-0.12.5/api/examples/pull-progress/ inflating: ollama-0.12.5/api/examples/pull-progress/main.go inflating: ollama-0.12.5/api/types.go inflating: ollama-0.12.5/api/types_test.go inflating: ollama-0.12.5/api/types_typescript_test.go creating: ollama-0.12.5/app/ extracting: ollama-0.12.5/app/.gitignore inflating: ollama-0.12.5/app/README.md creating: ollama-0.12.5/app/assets/ inflating: ollama-0.12.5/app/assets/app.ico inflating: ollama-0.12.5/app/assets/assets.go inflating: ollama-0.12.5/app/assets/setup.bmp inflating: ollama-0.12.5/app/assets/tray.ico inflating: ollama-0.12.5/app/assets/tray_upgrade.ico creating: ollama-0.12.5/app/lifecycle/ inflating: ollama-0.12.5/app/lifecycle/getstarted_nonwindows.go inflating: ollama-0.12.5/app/lifecycle/getstarted_windows.go inflating: ollama-0.12.5/app/lifecycle/lifecycle.go inflating: ollama-0.12.5/app/lifecycle/logging.go inflating: ollama-0.12.5/app/lifecycle/logging_nonwindows.go inflating: ollama-0.12.5/app/lifecycle/logging_test.go inflating: ollama-0.12.5/app/lifecycle/logging_windows.go inflating: ollama-0.12.5/app/lifecycle/paths.go inflating: ollama-0.12.5/app/lifecycle/server.go inflating: ollama-0.12.5/app/lifecycle/server_unix.go inflating: ollama-0.12.5/app/lifecycle/server_windows.go inflating: ollama-0.12.5/app/lifecycle/updater.go inflating: ollama-0.12.5/app/lifecycle/updater_nonwindows.go inflating: ollama-0.12.5/app/lifecycle/updater_windows.go inflating: ollama-0.12.5/app/main.go inflating: ollama-0.12.5/app/ollama.iss inflating: ollama-0.12.5/app/ollama.rc inflating: ollama-0.12.5/app/ollama_welcome.ps1 creating: ollama-0.12.5/app/store/ inflating: 
ollama-0.12.5/app/store/store.go inflating: ollama-0.12.5/app/store/store_darwin.go inflating: ollama-0.12.5/app/store/store_linux.go inflating: ollama-0.12.5/app/store/store_windows.go creating: ollama-0.12.5/app/tray/ creating: ollama-0.12.5/app/tray/commontray/ inflating: ollama-0.12.5/app/tray/commontray/types.go inflating: ollama-0.12.5/app/tray/tray.go inflating: ollama-0.12.5/app/tray/tray_nonwindows.go inflating: ollama-0.12.5/app/tray/tray_windows.go creating: ollama-0.12.5/app/tray/wintray/ inflating: ollama-0.12.5/app/tray/wintray/eventloop.go inflating: ollama-0.12.5/app/tray/wintray/menus.go inflating: ollama-0.12.5/app/tray/wintray/messages.go inflating: ollama-0.12.5/app/tray/wintray/notifyicon.go inflating: ollama-0.12.5/app/tray/wintray/tray.go inflating: ollama-0.12.5/app/tray/wintray/w32api.go inflating: ollama-0.12.5/app/tray/wintray/winclass.go creating: ollama-0.12.5/auth/ inflating: ollama-0.12.5/auth/auth.go creating: ollama-0.12.5/cmd/ inflating: ollama-0.12.5/cmd/cmd.go inflating: ollama-0.12.5/cmd/cmd_test.go inflating: ollama-0.12.5/cmd/interactive.go inflating: ollama-0.12.5/cmd/interactive_test.go creating: ollama-0.12.5/cmd/runner/ inflating: ollama-0.12.5/cmd/runner/main.go inflating: ollama-0.12.5/cmd/start.go inflating: ollama-0.12.5/cmd/start_darwin.go inflating: ollama-0.12.5/cmd/start_default.go inflating: ollama-0.12.5/cmd/start_windows.go inflating: ollama-0.12.5/cmd/warn_thinking_test.go creating: ollama-0.12.5/convert/ inflating: ollama-0.12.5/convert/convert.go inflating: ollama-0.12.5/convert/convert_bert.go inflating: ollama-0.12.5/convert/convert_commandr.go inflating: ollama-0.12.5/convert/convert_gemma.go inflating: ollama-0.12.5/convert/convert_gemma2.go inflating: ollama-0.12.5/convert/convert_gemma2_adapter.go inflating: ollama-0.12.5/convert/convert_gemma3.go inflating: ollama-0.12.5/convert/convert_gemma3n.go inflating: ollama-0.12.5/convert/convert_gptoss.go inflating: ollama-0.12.5/convert/convert_llama.go 
inflating: ollama-0.12.5/convert/convert_llama4.go inflating: ollama-0.12.5/convert/convert_llama_adapter.go inflating: ollama-0.12.5/convert/convert_mistral.go inflating: ollama-0.12.5/convert/convert_mixtral.go inflating: ollama-0.12.5/convert/convert_mllama.go inflating: ollama-0.12.5/convert/convert_phi3.go inflating: ollama-0.12.5/convert/convert_qwen2.go inflating: ollama-0.12.5/convert/convert_qwen25vl.go inflating: ollama-0.12.5/convert/convert_test.go inflating: ollama-0.12.5/convert/reader.go inflating: ollama-0.12.5/convert/reader_safetensors.go inflating: ollama-0.12.5/convert/reader_test.go inflating: ollama-0.12.5/convert/reader_torch.go creating: ollama-0.12.5/convert/sentencepiece/ inflating: ollama-0.12.5/convert/sentencepiece/sentencepiece_model.pb.go inflating: ollama-0.12.5/convert/sentencepiece_model.proto inflating: ollama-0.12.5/convert/tensor.go inflating: ollama-0.12.5/convert/tensor_test.go creating: ollama-0.12.5/convert/testdata/ inflating: ollama-0.12.5/convert/testdata/Meta-Llama-3-8B-Instruct.json inflating: ollama-0.12.5/convert/testdata/Meta-Llama-3.1-8B-Instruct.json inflating: ollama-0.12.5/convert/testdata/Mistral-7B-Instruct-v0.2.json inflating: ollama-0.12.5/convert/testdata/Mixtral-8x7B-Instruct-v0.1.json inflating: ollama-0.12.5/convert/testdata/Phi-3-mini-128k-instruct.json inflating: ollama-0.12.5/convert/testdata/Qwen2.5-0.5B-Instruct.json inflating: ollama-0.12.5/convert/testdata/all-MiniLM-L6-v2.json inflating: ollama-0.12.5/convert/testdata/c4ai-command-r-v01.json inflating: ollama-0.12.5/convert/testdata/gemma-2-2b-it.json inflating: ollama-0.12.5/convert/testdata/gemma-2-9b-it.json inflating: ollama-0.12.5/convert/testdata/gemma-2b-it.json inflating: ollama-0.12.5/convert/tokenizer.go inflating: ollama-0.12.5/convert/tokenizer_spm.go inflating: ollama-0.12.5/convert/tokenizer_test.go creating: ollama-0.12.5/discover/ inflating: ollama-0.12.5/discover/cpu_linux.go inflating: ollama-0.12.5/discover/cpu_linux_test.go 
inflating: ollama-0.12.5/discover/cpu_windows.go inflating: ollama-0.12.5/discover/cpu_windows_test.go inflating: ollama-0.12.5/discover/gpu.go inflating: ollama-0.12.5/discover/gpu_darwin.go inflating: ollama-0.12.5/discover/gpu_info_darwin.h inflating: ollama-0.12.5/discover/gpu_info_darwin.m inflating: ollama-0.12.5/discover/path.go inflating: ollama-0.12.5/discover/runner.go inflating: ollama-0.12.5/discover/runner_test.go inflating: ollama-0.12.5/discover/types.go creating: ollama-0.12.5/docs/ inflating: ollama-0.12.5/docs/README.md inflating: ollama-0.12.5/docs/api.md inflating: ollama-0.12.5/docs/cloud.md inflating: ollama-0.12.5/docs/development.md inflating: ollama-0.12.5/docs/docker.md inflating: ollama-0.12.5/docs/examples.md inflating: ollama-0.12.5/docs/faq.md inflating: ollama-0.12.5/docs/gpu.md creating: ollama-0.12.5/docs/images/ inflating: ollama-0.12.5/docs/images/ollama-keys.png inflating: ollama-0.12.5/docs/images/signup.png inflating: ollama-0.12.5/docs/import.md inflating: ollama-0.12.5/docs/linux.md inflating: ollama-0.12.5/docs/macos.md inflating: ollama-0.12.5/docs/modelfile.md inflating: ollama-0.12.5/docs/openai.md inflating: ollama-0.12.5/docs/template.md inflating: ollama-0.12.5/docs/troubleshooting.md inflating: ollama-0.12.5/docs/windows.md creating: ollama-0.12.5/envconfig/ inflating: ollama-0.12.5/envconfig/config.go inflating: ollama-0.12.5/envconfig/config_test.go creating: ollama-0.12.5/format/ inflating: ollama-0.12.5/format/bytes.go inflating: ollama-0.12.5/format/bytes_test.go inflating: ollama-0.12.5/format/format.go inflating: ollama-0.12.5/format/format_test.go inflating: ollama-0.12.5/format/time.go inflating: ollama-0.12.5/format/time_test.go creating: ollama-0.12.5/fs/ inflating: ollama-0.12.5/fs/config.go creating: ollama-0.12.5/fs/ggml/ inflating: ollama-0.12.5/fs/ggml/ggml.go inflating: ollama-0.12.5/fs/ggml/ggml_test.go inflating: ollama-0.12.5/fs/ggml/gguf.go inflating: ollama-0.12.5/fs/ggml/gguf_test.go inflating: 
ollama-0.12.5/fs/ggml/type.go creating: ollama-0.12.5/fs/gguf/ inflating: ollama-0.12.5/fs/gguf/gguf.go inflating: ollama-0.12.5/fs/gguf/gguf_test.go inflating: ollama-0.12.5/fs/gguf/keyvalue.go inflating: ollama-0.12.5/fs/gguf/keyvalue_test.go inflating: ollama-0.12.5/fs/gguf/lazy.go inflating: ollama-0.12.5/fs/gguf/reader.go inflating: ollama-0.12.5/fs/gguf/tensor.go creating: ollama-0.12.5/fs/util/ creating: ollama-0.12.5/fs/util/bufioutil/ inflating: ollama-0.12.5/fs/util/bufioutil/buffer_seeker.go inflating: ollama-0.12.5/fs/util/bufioutil/buffer_seeker_test.go inflating: ollama-0.12.5/go.mod inflating: ollama-0.12.5/go.sum creating: ollama-0.12.5/harmony/ inflating: ollama-0.12.5/harmony/harmonyparser.go inflating: ollama-0.12.5/harmony/harmonyparser_test.go creating: ollama-0.12.5/integration/ inflating: ollama-0.12.5/integration/README.md inflating: ollama-0.12.5/integration/api_test.go inflating: ollama-0.12.5/integration/basic_test.go inflating: ollama-0.12.5/integration/concurrency_test.go inflating: ollama-0.12.5/integration/context_test.go inflating: ollama-0.12.5/integration/embed_test.go inflating: ollama-0.12.5/integration/library_models_test.go inflating: ollama-0.12.5/integration/llm_image_test.go inflating: ollama-0.12.5/integration/max_queue_test.go inflating: ollama-0.12.5/integration/model_arch_test.go inflating: ollama-0.12.5/integration/model_perf_test.go inflating: ollama-0.12.5/integration/quantization_test.go creating: ollama-0.12.5/integration/testdata/ inflating: ollama-0.12.5/integration/testdata/embed.json inflating: ollama-0.12.5/integration/testdata/shakespeare.txt inflating: ollama-0.12.5/integration/utils_test.go creating: ollama-0.12.5/kvcache/ inflating: ollama-0.12.5/kvcache/cache.go inflating: ollama-0.12.5/kvcache/causal.go inflating: ollama-0.12.5/kvcache/causal_test.go inflating: ollama-0.12.5/kvcache/encoder.go inflating: ollama-0.12.5/kvcache/wrapper.go creating: ollama-0.12.5/llama/ extracting: 
ollama-0.12.5/llama/.gitignore inflating: ollama-0.12.5/llama/README.md inflating: ollama-0.12.5/llama/build-info.cpp inflating: ollama-0.12.5/llama/build-info.cpp.in creating: ollama-0.12.5/llama/llama.cpp/ inflating: ollama-0.12.5/llama/llama.cpp/.rsync-filter inflating: ollama-0.12.5/llama/llama.cpp/LICENSE creating: ollama-0.12.5/llama/llama.cpp/common/ inflating: ollama-0.12.5/llama/llama.cpp/common/base64.hpp inflating: ollama-0.12.5/llama/llama.cpp/common/common.cpp inflating: ollama-0.12.5/llama/llama.cpp/common/common.go inflating: ollama-0.12.5/llama/llama.cpp/common/common.h inflating: ollama-0.12.5/llama/llama.cpp/common/json-schema-to-grammar.cpp inflating: ollama-0.12.5/llama/llama.cpp/common/json-schema-to-grammar.h inflating: ollama-0.12.5/llama/llama.cpp/common/log.cpp inflating: ollama-0.12.5/llama/llama.cpp/common/log.h inflating: ollama-0.12.5/llama/llama.cpp/common/sampling.cpp inflating: ollama-0.12.5/llama/llama.cpp/common/sampling.h creating: ollama-0.12.5/llama/llama.cpp/include/ inflating: ollama-0.12.5/llama/llama.cpp/include/llama-cpp.h inflating: ollama-0.12.5/llama/llama.cpp/include/llama.h creating: ollama-0.12.5/llama/llama.cpp/src/ inflating: ollama-0.12.5/llama/llama.cpp/src/llama-adapter.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-adapter.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-arch.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-arch.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-batch.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-batch.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-chat.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-chat.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-context.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-context.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-cparams.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-cparams.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-grammar.cpp inflating: 
ollama-0.12.5/llama/llama.cpp/src/llama-grammar.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-graph.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-graph.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-hparams.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-hparams.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-impl.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-impl.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-io.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-io.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache-iswa.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache-iswa.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cache.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-kv-cells.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-hybrid.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-hybrid.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-recurrent.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory-recurrent.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-memory.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-mmap.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-mmap.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-loader.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-loader.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-saver.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model-saver.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model.cpp inflating: ollama-0.12.5/llama/llama.cpp/src/llama-model.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-quant.cpp extracting: ollama-0.12.5/llama/llama.cpp/src/llama-quant.h inflating: ollama-0.12.5/llama/llama.cpp/src/llama-sampling.cpp inflating: 
ollama-0.12.5/llama/llama.cpp/src/llama-sampling.h
  inflating: ollama-0.12.5/llama/llama.cpp/src/llama-vocab.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/src/llama-vocab.h
  inflating: ollama-0.12.5/llama/llama.cpp/src/llama.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/src/llama.go
  inflating: ollama-0.12.5/llama/llama.cpp/src/unicode-data.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/src/unicode-data.h
  inflating: ollama-0.12.5/llama/llama.cpp/src/unicode.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/src/unicode.h
   creating: ollama-0.12.5/llama/llama.cpp/tools/
   creating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip-impl.h
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/clip.h
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-audio.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-audio.h
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-helper.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd-helper.h
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.cpp
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.go
  inflating: ollama-0.12.5/llama/llama.cpp/tools/mtmd/mtmd.h
   creating: ollama-0.12.5/llama/llama.cpp/vendor/
   creating: ollama-0.12.5/llama/llama.cpp/vendor/miniaudio/
  inflating: ollama-0.12.5/llama/llama.cpp/vendor/miniaudio/miniaudio.h
   creating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/
  inflating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/json.hpp
  inflating: ollama-0.12.5/llama/llama.cpp/vendor/nlohmann/json_fwd.hpp
   creating: ollama-0.12.5/llama/llama.cpp/vendor/stb/
  inflating: ollama-0.12.5/llama/llama.cpp/vendor/stb/stb_image.h
  inflating: ollama-0.12.5/llama/llama.go
  inflating: ollama-0.12.5/llama/llama_test.go
   creating: ollama-0.12.5/llama/patches/
 extracting: ollama-0.12.5/llama/patches/.gitignore
  inflating:
ollama-0.12.5/llama/patches/0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch
  inflating: ollama-0.12.5/llama/patches/0002-pretokenizer.patch
  inflating: ollama-0.12.5/llama/patches/0003-clip-unicode.patch
  inflating: ollama-0.12.5/llama/patches/0004-solar-pro.patch
  inflating: ollama-0.12.5/llama/patches/0005-fix-deepseek-deseret-regex.patch
  inflating: ollama-0.12.5/llama/patches/0006-maintain-ordering-for-rules-for-grammar.patch
  inflating: ollama-0.12.5/llama/patches/0007-sort-devices-by-score.patch
  inflating: ollama-0.12.5/llama/patches/0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch
  inflating: ollama-0.12.5/llama/patches/0009-remove-amx.patch
  inflating: ollama-0.12.5/llama/patches/0010-fix-string-arr-kv-loading.patch
  inflating: ollama-0.12.5/llama/patches/0011-ollama-debug-tensor.patch
  inflating: ollama-0.12.5/llama/patches/0012-add-ollama-vocab-for-grammar-support.patch
  inflating: ollama-0.12.5/llama/patches/0013-add-argsort-and-cuda-copy-for-i32.patch
  inflating: ollama-0.12.5/llama/patches/0014-graph-memory-reporting-on-failure.patch
  inflating: ollama-0.12.5/llama/patches/0015-ggml-Export-GPU-UUIDs.patch
  inflating: ollama-0.12.5/llama/patches/0016-add-C-API-for-mtmd_input_text.patch
  inflating: ollama-0.12.5/llama/patches/0017-no-power-throttling-win32-with-gnuc.patch
  inflating: ollama-0.12.5/llama/patches/0018-BF16-macos-version-guard.patch
  inflating: ollama-0.12.5/llama/patches/0019-Enable-CUDA-Graphs-for-gemma3n.patch
  inflating: ollama-0.12.5/llama/patches/0020-Disable-ggml-blas-on-macos-v13-and-older.patch
  inflating: ollama-0.12.5/llama/patches/0021-fix-mtmd-audio.cpp-build-on-windows.patch
  inflating: ollama-0.12.5/llama/patches/0022-ggml-No-alloc-mode.patch
  inflating: ollama-0.12.5/llama/patches/0023-decode-disable-output_all.patch
  inflating: ollama-0.12.5/llama/patches/0024-ggml-Enable-resetting-backend-devices.patch
  inflating: ollama-0.12.5/llama/patches/0025-harden-uncaught-exception-registration.patch
  inflating:
ollama-0.12.5/llama/patches/0026-GPU-discovery-enhancements.patch
  inflating: ollama-0.12.5/llama/sampling_ext.cpp
  inflating: ollama-0.12.5/llama/sampling_ext.h
   creating: ollama-0.12.5/llm/
  inflating: ollama-0.12.5/llm/llm_darwin.go
  inflating: ollama-0.12.5/llm/llm_linux.go
  inflating: ollama-0.12.5/llm/llm_windows.go
  inflating: ollama-0.12.5/llm/memory.go
  inflating: ollama-0.12.5/llm/memory_test.go
  inflating: ollama-0.12.5/llm/server.go
  inflating: ollama-0.12.5/llm/server_test.go
  inflating: ollama-0.12.5/llm/status.go
   creating: ollama-0.12.5/logutil/
  inflating: ollama-0.12.5/logutil/logutil.go
   creating: ollama-0.12.5/macapp/
  inflating: ollama-0.12.5/macapp/.eslintrc.json
  inflating: ollama-0.12.5/macapp/.gitignore
  inflating: ollama-0.12.5/macapp/README.md
   creating: ollama-0.12.5/macapp/assets/
  inflating: ollama-0.12.5/macapp/assets/icon.icns
 extracting: ollama-0.12.5/macapp/assets/iconDarkTemplate.png
 extracting: ollama-0.12.5/macapp/assets/iconDarkTemplate@2x.png
 extracting: ollama-0.12.5/macapp/assets/iconDarkUpdateTemplate.png
 extracting: ollama-0.12.5/macapp/assets/iconDarkUpdateTemplate@2x.png
 extracting: ollama-0.12.5/macapp/assets/iconTemplate.png
 extracting: ollama-0.12.5/macapp/assets/iconTemplate@2x.png
 extracting: ollama-0.12.5/macapp/assets/iconUpdateTemplate.png
 extracting: ollama-0.12.5/macapp/assets/iconUpdateTemplate@2x.png
  inflating: ollama-0.12.5/macapp/forge.config.ts
  inflating: ollama-0.12.5/macapp/package-lock.json
  inflating: ollama-0.12.5/macapp/package.json
  inflating: ollama-0.12.5/macapp/postcss.config.js
   creating: ollama-0.12.5/macapp/src/
  inflating: ollama-0.12.5/macapp/src/app.css
  inflating: ollama-0.12.5/macapp/src/app.tsx
  inflating: ollama-0.12.5/macapp/src/declarations.d.ts
  inflating: ollama-0.12.5/macapp/src/index.html
  inflating: ollama-0.12.5/macapp/src/index.ts
  inflating: ollama-0.12.5/macapp/src/install.ts
  inflating: ollama-0.12.5/macapp/src/ollama.svg
 extracting: ollama-0.12.5/macapp/src/preload.ts
  inflating:
ollama-0.12.5/macapp/src/renderer.tsx
  inflating: ollama-0.12.5/macapp/tailwind.config.js
  inflating: ollama-0.12.5/macapp/tsconfig.json
  inflating: ollama-0.12.5/macapp/webpack.main.config.ts
  inflating: ollama-0.12.5/macapp/webpack.plugins.ts
  inflating: ollama-0.12.5/macapp/webpack.renderer.config.ts
  inflating: ollama-0.12.5/macapp/webpack.rules.ts
  inflating: ollama-0.12.5/main.go
   creating: ollama-0.12.5/middleware/
  inflating: ollama-0.12.5/middleware/openai.go
  inflating: ollama-0.12.5/middleware/openai_test.go
   creating: ollama-0.12.5/ml/
  inflating: ollama-0.12.5/ml/backend.go
   creating: ollama-0.12.5/ml/backend/
  inflating: ollama-0.12.5/ml/backend/backend.go
   creating: ollama-0.12.5/ml/backend/ggml/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml.go
   creating: ollama-0.12.5/ml/backend/ggml/ggml/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/.rsync-filter
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/LICENSE
   creating: ollama-0.12.5/ml/backend/ggml/ggml/cmake/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/cmake/common.cmake
   creating: ollama-0.12.5/ml/backend/ggml/ggml/include/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-alloc.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-backend.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-blas.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cann.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cpp.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cpu.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-cuda.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-metal.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-opencl.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-opt.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-rpc.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-sycl.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-vulkan.h
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/include/ggml-zdnn.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ggml.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/gguf.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/include/ollama-debug.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/CMakeLists.txt
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend-impl.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend-reg.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend.cpp
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/CMakeLists.txt
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/blas.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-blas/ggml-blas.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-common.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/CMakeLists.txt
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/common.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch-fallback.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/arm.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/cpu-feats.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/quants.c
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/arm/repack.cpp
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/cpu-feats.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/x86.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_amd64.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_arm64.go
 extracting: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/cpu_debug.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/hbm.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/hbm.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/llamafile.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/simd-mappings.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/CMakeLists.txt
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/acc.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/acc.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/add-id.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/arange.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/arange.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argmax.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/argsort.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/binbcast.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/clamp.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/common.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/concat.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/concat.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv-transpose-1d.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-dw.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d-transpose.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/conv2d.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/convert.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/convert.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/count-equal.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cp-async.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy-utils.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cpy.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/cross-entropy-loss.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/dequantize.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/diagmask.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-common.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-mma-f16.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-tile.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-vec.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/fattn.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/getrows.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ggml-cuda.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/gla.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/gla.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/im2col.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mean.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mean.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mma.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmf.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmq.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvf.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/mmvq.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/norm.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/norm.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-adamw.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/opt-step-sgd.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/out-prod.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pad_reflect_1d.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/pool2d.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/quantize.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/reduce_rows.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/roll.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/roll.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/rope.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/rope.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/scale.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/scale.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/set-rows.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softcap.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/softmax.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-conv.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/ssm-scan.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sum.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sum.cuh
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/sumrows.cuh
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_1-ncols2_8.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_2.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_16-ncols2_4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_2-ncols2_8.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_32-ncols2_2.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_2.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_4-ncols2_8.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_64-ncols2_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_2.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-mma-f16-instance-ncols1_8-ncols2_8.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-f16-q8_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_0-q8_0.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q4_1-q8_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_0-q8_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q5_1-q8_0.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-f16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q5_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/fattn-vec-instance-q8_0-q8_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_10.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_11.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_12.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_13.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_14.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_15.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_16.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_2.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_3.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_5.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_6.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_7.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_8.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmf-instance-ncols_9.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq1_s.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_s.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xs.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq2_xxs.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_s.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq3_xxs.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_nl.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-iq4_xs.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-mxfp4.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q2_k.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q3_k.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_1.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q4_k.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_1.cu
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q5_k.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q6_k.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/template-instances/mmq-instance-q8_0.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/topk-moe.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/tsembd.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/unary.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/unary.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/upscale.cuh
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vecdotq.cuh
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/cuda.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/hip.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/vendors/musa.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cu
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cuda/wkv.cuh
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-hip/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-hip/CMakeLists.txt
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h
   creating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/CMakeLists.txt
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-common.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.h
  inflating:
ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-context.m
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-device.m
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-impl.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-ops.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/ggml-metal.metal
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-metal/metal.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-opt.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-threading.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-threading.h
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ggml_darwin_arm64.go
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/gguf.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/mem_hip.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp
  inflating: ollama-0.12.5/ml/backend/ggml/ggml/src/ollama-debug.c
  inflating: ollama-0.12.5/ml/backend/ggml/quantization.go
  inflating: ollama-0.12.5/ml/backend/ggml/threads.go
  inflating: ollama-0.12.5/ml/backend/ggml/threads_debug.go
  inflating: ollama-0.12.5/ml/device.go
   creating: ollama-0.12.5/ml/nn/
  inflating: ollama-0.12.5/ml/nn/attention.go
  inflating: ollama-0.12.5/ml/nn/convolution.go
  inflating: ollama-0.12.5/ml/nn/embedding.go
   creating: ollama-0.12.5/ml/nn/fast/
  inflating: ollama-0.12.5/ml/nn/fast/rope.go
  inflating: ollama-0.12.5/ml/nn/linear.go
  inflating: ollama-0.12.5/ml/nn/normalization.go
   creating: ollama-0.12.5/ml/nn/pooling/
  inflating: ollama-0.12.5/ml/nn/pooling/pooling.go
  inflating: ollama-0.12.5/ml/nn/pooling/pooling_test.go
   creating: ollama-0.12.5/ml/nn/rope/
  inflating: ollama-0.12.5/ml/nn/rope/rope.go
   creating: ollama-0.12.5/model/
  inflating: ollama-0.12.5/model/bytepairencoding.go
  inflating: ollama-0.12.5/model/bytepairencoding_test.go
   creating: ollama-0.12.5/model/imageproc/
  inflating: ollama-0.12.5/model/imageproc/images.go
  inflating: ollama-0.12.5/model/imageproc/images_test.go
   creating: ollama-0.12.5/model/input/
  inflating: ollama-0.12.5/model/input/input.go
  inflating: ollama-0.12.5/model/model.go
  inflating: ollama-0.12.5/model/model_test.go
   creating: ollama-0.12.5/model/models/
   creating: ollama-0.12.5/model/models/bert/
  inflating: ollama-0.12.5/model/models/bert/embed.go
   creating: ollama-0.12.5/model/models/deepseek2/
  inflating: ollama-0.12.5/model/models/deepseek2/model.go
   creating: ollama-0.12.5/model/models/gemma2/
  inflating: ollama-0.12.5/model/models/gemma2/model.go
   creating: ollama-0.12.5/model/models/gemma3/
  inflating: ollama-0.12.5/model/models/gemma3/embed.go
  inflating: ollama-0.12.5/model/models/gemma3/model.go
  inflating: ollama-0.12.5/model/models/gemma3/model_text.go
  inflating: ollama-0.12.5/model/models/gemma3/model_vision.go
  inflating: ollama-0.12.5/model/models/gemma3/process_image.go
   creating: ollama-0.12.5/model/models/gemma3n/
  inflating: ollama-0.12.5/model/models/gemma3n/model.go
  inflating: ollama-0.12.5/model/models/gemma3n/model_text.go
   creating: ollama-0.12.5/model/models/gptoss/
  inflating: ollama-0.12.5/model/models/gptoss/model.go
   creating: ollama-0.12.5/model/models/llama/
  inflating: ollama-0.12.5/model/models/llama/model.go
   creating: ollama-0.12.5/model/models/llama4/
  inflating: ollama-0.12.5/model/models/llama4/model.go
  inflating: ollama-0.12.5/model/models/llama4/model_text.go
  inflating: ollama-0.12.5/model/models/llama4/model_vision.go
  inflating: ollama-0.12.5/model/models/llama4/process_image.go
  inflating: ollama-0.12.5/model/models/llama4/process_image_test.go
   creating: ollama-0.12.5/model/models/mistral3/
  inflating: ollama-0.12.5/model/models/mistral3/imageproc.go
  inflating: ollama-0.12.5/model/models/mistral3/model.go
  inflating: ollama-0.12.5/model/models/mistral3/model_text.go
  inflating: ollama-0.12.5/model/models/mistral3/model_vision.go
   creating: ollama-0.12.5/model/models/mllama/
  inflating: ollama-0.12.5/model/models/mllama/model.go
  inflating: ollama-0.12.5/model/models/mllama/model_text.go
  inflating: ollama-0.12.5/model/models/mllama/model_vision.go
  inflating: ollama-0.12.5/model/models/mllama/process_image.go
  inflating: ollama-0.12.5/model/models/mllama/process_image_test.go
  inflating: ollama-0.12.5/model/models/models.go
   creating: ollama-0.12.5/model/models/qwen2/
  inflating: ollama-0.12.5/model/models/qwen2/model.go
   creating: ollama-0.12.5/model/models/qwen25vl/
  inflating: ollama-0.12.5/model/models/qwen25vl/model.go
  inflating: ollama-0.12.5/model/models/qwen25vl/model_text.go
  inflating: ollama-0.12.5/model/models/qwen25vl/model_vision.go
  inflating: ollama-0.12.5/model/models/qwen25vl/process_image.go
   creating: ollama-0.12.5/model/models/qwen3/
  inflating: ollama-0.12.5/model/models/qwen3/embed.go
  inflating: ollama-0.12.5/model/models/qwen3/model.go
   creating: ollama-0.12.5/model/parsers/
  inflating: ollama-0.12.5/model/parsers/parsers.go
  inflating: ollama-0.12.5/model/parsers/qwen3coder.go
  inflating: ollama-0.12.5/model/parsers/qwen3coder_test.go
   creating: ollama-0.12.5/model/renderers/
  inflating: ollama-0.12.5/model/renderers/qwen3coder.go
  inflating: ollama-0.12.5/model/renderers/qwen3coder_test.go
  inflating: ollama-0.12.5/model/renderers/renderer.go
  inflating: ollama-0.12.5/model/sentencepiece.go
  inflating: ollama-0.12.5/model/sentencepiece_test.go
   creating: ollama-0.12.5/model/testdata/
   creating: ollama-0.12.5/model/testdata/gemma2/
  inflating: ollama-0.12.5/model/testdata/gemma2/tokenizer.model
   creating: ollama-0.12.5/model/testdata/llama3.2/
  inflating: ollama-0.12.5/model/testdata/llama3.2/encoder.json
  inflating: ollama-0.12.5/model/testdata/llama3.2/vocab.bpe
  inflating: ollama-0.12.5/model/testdata/war-and-peace.txt
  inflating: ollama-0.12.5/model/textprocessor.go
  inflating: ollama-0.12.5/model/vocabulary.go
  inflating: ollama-0.12.5/model/vocabulary_test.go
  inflating: ollama-0.12.5/model/wordpiece.go
  inflating: ollama-0.12.5/model/wordpiece_test.go
   creating: ollama-0.12.5/openai/
  inflating: ollama-0.12.5/openai/openai.go
  inflating: ollama-0.12.5/openai/openai_test.go
   creating: ollama-0.12.5/parser/
  inflating: ollama-0.12.5/parser/expandpath_test.go
  inflating: ollama-0.12.5/parser/parser.go
  inflating: ollama-0.12.5/parser/parser_test.go
   creating: ollama-0.12.5/progress/
  inflating: ollama-0.12.5/progress/bar.go
  inflating: ollama-0.12.5/progress/progress.go
  inflating: ollama-0.12.5/progress/spinner.go
   creating: ollama-0.12.5/readline/
  inflating: ollama-0.12.5/readline/buffer.go
  inflating: ollama-0.12.5/readline/errors.go
  inflating: ollama-0.12.5/readline/history.go
  inflating: ollama-0.12.5/readline/readline.go
  inflating: ollama-0.12.5/readline/readline_unix.go
  inflating: ollama-0.12.5/readline/readline_windows.go
  inflating: ollama-0.12.5/readline/term.go
  inflating: ollama-0.12.5/readline/term_bsd.go
  inflating: ollama-0.12.5/readline/term_linux.go
  inflating: ollama-0.12.5/readline/term_windows.go
  inflating: ollama-0.12.5/readline/types.go
   creating: ollama-0.12.5/runner/
  inflating: ollama-0.12.5/runner/README.md
   creating: ollama-0.12.5/runner/common/
  inflating: ollama-0.12.5/runner/common/stop.go
  inflating: ollama-0.12.5/runner/common/stop_test.go
   creating: ollama-0.12.5/runner/llamarunner/
  inflating: ollama-0.12.5/runner/llamarunner/cache.go
  inflating: ollama-0.12.5/runner/llamarunner/cache_test.go
  inflating: ollama-0.12.5/runner/llamarunner/image.go
  inflating: ollama-0.12.5/runner/llamarunner/image_test.go
  inflating: ollama-0.12.5/runner/llamarunner/runner.go
   creating: ollama-0.12.5/runner/ollamarunner/
  inflating: ollama-0.12.5/runner/ollamarunner/cache.go
  inflating: ollama-0.12.5/runner/ollamarunner/cache_test.go
  inflating: ollama-0.12.5/runner/ollamarunner/multimodal.go
  inflating: ollama-0.12.5/runner/ollamarunner/runner.go
  inflating: ollama-0.12.5/runner/runner.go
   creating: ollama-0.12.5/sample/
  inflating: ollama-0.12.5/sample/samplers.go
  inflating: ollama-0.12.5/sample/samplers_benchmark_test.go
  inflating: ollama-0.12.5/sample/samplers_test.go
  inflating: ollama-0.12.5/sample/transforms.go
  inflating: ollama-0.12.5/sample/transforms_test.go
   creating: ollama-0.12.5/scripts/
  inflating: ollama-0.12.5/scripts/build_darwin.sh
  inflating: ollama-0.12.5/scripts/build_docker.sh
  inflating: ollama-0.12.5/scripts/build_linux.sh
  inflating: ollama-0.12.5/scripts/build_windows.ps1
  inflating: ollama-0.12.5/scripts/env.sh
  inflating: ollama-0.12.5/scripts/install.sh
  inflating: ollama-0.12.5/scripts/push_docker.sh
  inflating: ollama-0.12.5/scripts/tag_latest.sh
   creating: ollama-0.12.5/server/
  inflating: ollama-0.12.5/server/auth.go
  inflating: ollama-0.12.5/server/create.go
  inflating: ollama-0.12.5/server/create_test.go
  inflating: ollama-0.12.5/server/download.go
  inflating: ollama-0.12.5/server/fixblobs.go
  inflating: ollama-0.12.5/server/fixblobs_test.go
  inflating: ollama-0.12.5/server/images.go
  inflating: ollama-0.12.5/server/images_test.go
   creating: ollama-0.12.5/server/internal/
   creating: ollama-0.12.5/server/internal/cache/
   creating: ollama-0.12.5/server/internal/cache/blob/
  inflating: ollama-0.12.5/server/internal/cache/blob/cache.go
  inflating: ollama-0.12.5/server/internal/cache/blob/cache_test.go
  inflating: ollama-0.12.5/server/internal/cache/blob/casecheck_test.go
  inflating: ollama-0.12.5/server/internal/cache/blob/chunked.go
  inflating: ollama-0.12.5/server/internal/cache/blob/digest.go
  inflating: ollama-0.12.5/server/internal/cache/blob/digest_test.go
   creating: ollama-0.12.5/server/internal/client/
   creating: ollama-0.12.5/server/internal/client/ollama/
  inflating: ollama-0.12.5/server/internal/client/ollama/registry.go
  inflating: ollama-0.12.5/server/internal/client/ollama/registry_synctest_test.go
  inflating: ollama-0.12.5/server/internal/client/ollama/registry_test.go
  inflating: ollama-0.12.5/server/internal/client/ollama/trace.go
   creating: ollama-0.12.5/server/internal/internal/
   creating: ollama-0.12.5/server/internal/internal/backoff/
  inflating: ollama-0.12.5/server/internal/internal/backoff/backoff.go
  inflating: ollama-0.12.5/server/internal/internal/backoff/backoff_synctest_test.go
  inflating: ollama-0.12.5/server/internal/internal/backoff/backoff_test.go
   creating: ollama-0.12.5/server/internal/internal/names/
  inflating: ollama-0.12.5/server/internal/internal/names/name.go
  inflating: ollama-0.12.5/server/internal/internal/names/name_test.go
   creating: ollama-0.12.5/server/internal/internal/stringsx/
  inflating: ollama-0.12.5/server/internal/internal/stringsx/stringsx.go
  inflating: ollama-0.12.5/server/internal/internal/stringsx/stringsx_test.go
   creating: ollama-0.12.5/server/internal/internal/syncs/
  inflating: ollama-0.12.5/server/internal/internal/syncs/line.go
  inflating: ollama-0.12.5/server/internal/internal/syncs/line_test.go
  inflating: ollama-0.12.5/server/internal/internal/syncs/syncs.go
   creating: ollama-0.12.5/server/internal/manifest/
  inflating: ollama-0.12.5/server/internal/manifest/manifest.go
   creating: ollama-0.12.5/server/internal/registry/
  inflating: ollama-0.12.5/server/internal/registry/server.go
  inflating: ollama-0.12.5/server/internal/registry/server_test.go
   creating: ollama-0.12.5/server/internal/registry/testdata/
   creating: ollama-0.12.5/server/internal/registry/testdata/models/
   creating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/
  inflating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/sha256-a4e5e156ddec27e286f75328784d7106b60a4eb1d246e950a001a3f944fbda99
  inflating: ollama-0.12.5/server/internal/registry/testdata/models/blobs/sha256-ecfb1acfca9c76444d622fcdc3840217bd502124a9d3687d438c19b3cb9c3cb1
   creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/
   creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/
   creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/
   creating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/smol/
  inflating: ollama-0.12.5/server/internal/registry/testdata/models/manifests/example.com/library/smol/latest
  inflating: ollama-0.12.5/server/internal/registry/testdata/registry.txt
   creating: ollama-0.12.5/server/internal/testutil/
  inflating: ollama-0.12.5/server/internal/testutil/testutil.go
  inflating: ollama-0.12.5/server/layer.go
  inflating: ollama-0.12.5/server/manifest.go
  inflating: ollama-0.12.5/server/manifest_test.go
  inflating: ollama-0.12.5/server/model.go
  inflating: ollama-0.12.5/server/modelpath.go
  inflating: ollama-0.12.5/server/modelpath_test.go
  inflating: ollama-0.12.5/server/prompt.go
  inflating: ollama-0.12.5/server/prompt_test.go
  inflating: ollama-0.12.5/server/quantization.go
  inflating: ollama-0.12.5/server/quantization_test.go
  inflating: ollama-0.12.5/server/routes.go
  inflating: ollama-0.12.5/server/routes_create_test.go
  inflating: ollama-0.12.5/server/routes_debug_test.go
  inflating: ollama-0.12.5/server/routes_delete_test.go
  inflating: ollama-0.12.5/server/routes_generate_test.go
  inflating: ollama-0.12.5/server/routes_harmony_streaming_test.go
  inflating: ollama-0.12.5/server/routes_list_test.go
  inflating: ollama-0.12.5/server/routes_test.go
  inflating: ollama-0.12.5/server/sched.go
  inflating: ollama-0.12.5/server/sched_test.go
 extracting: ollama-0.12.5/server/sparse_common.go
  inflating: ollama-0.12.5/server/sparse_windows.go
  inflating: ollama-0.12.5/server/upload.go
   creating: ollama-0.12.5/template/
  inflating: ollama-0.12.5/template/alfred.gotmpl
  inflating: ollama-0.12.5/template/alfred.json
  inflating: ollama-0.12.5/template/alpaca.gotmpl
  inflating: ollama-0.12.5/template/alpaca.json
  inflating: ollama-0.12.5/template/chatml.gotmpl
  inflating: ollama-0.12.5/template/chatml.json
  inflating: ollama-0.12.5/template/chatqa.gotmpl
  inflating: ollama-0.12.5/template/chatqa.json
  inflating: ollama-0.12.5/template/codellama-70b-instruct.gotmpl
  inflating: ollama-0.12.5/template/codellama-70b-instruct.json
  inflating: ollama-0.12.5/template/command-r.gotmpl
  inflating: ollama-0.12.5/template/command-r.json
  inflating: ollama-0.12.5/template/falcon-instruct.gotmpl
  inflating: ollama-0.12.5/template/falcon-instruct.json
  inflating: ollama-0.12.5/template/gemma-instruct.gotmpl
  inflating: ollama-0.12.5/template/gemma-instruct.json
  inflating: ollama-0.12.5/template/gemma3-instruct.gotmpl
  inflating: ollama-0.12.5/template/gemma3-instruct.json
  inflating: ollama-0.12.5/template/granite-instruct.gotmpl
  inflating: ollama-0.12.5/template/granite-instruct.json
  inflating: ollama-0.12.5/template/index.json
  inflating: ollama-0.12.5/template/llama2-chat.gotmpl
  inflating: ollama-0.12.5/template/llama2-chat.json
  inflating: ollama-0.12.5/template/llama3-instruct.gotmpl
  inflating: ollama-0.12.5/template/llama3-instruct.json
  inflating: ollama-0.12.5/template/magicoder.gotmpl
  inflating: ollama-0.12.5/template/magicoder.json
  inflating: ollama-0.12.5/template/mistral-instruct.gotmpl
  inflating: ollama-0.12.5/template/mistral-instruct.json
  inflating: ollama-0.12.5/template/openchat.gotmpl
  inflating: ollama-0.12.5/template/openchat.json
  inflating: ollama-0.12.5/template/phi-3.gotmpl
  inflating: ollama-0.12.5/template/phi-3.json
  inflating: ollama-0.12.5/template/solar-instruct.gotmpl
  inflating: ollama-0.12.5/template/solar-instruct.json
  inflating: ollama-0.12.5/template/starcoder2-instruct.gotmpl
  inflating: ollama-0.12.5/template/starcoder2-instruct.json
  inflating: ollama-0.12.5/template/template.go
  inflating: ollama-0.12.5/template/template_test.go
   creating: ollama-0.12.5/template/testdata/
   creating: ollama-0.12.5/template/testdata/alfred.gotmpl/
  inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/alfred.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/alpaca.gotmpl/
  inflating: ollama-0.12.5/template/testdata/alpaca.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/alpaca.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/alpaca.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/chatml.gotmpl/
  inflating: ollama-0.12.5/template/testdata/chatml.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/chatml.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/chatml.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/chatqa.gotmpl/
  inflating: ollama-0.12.5/template/testdata/chatqa.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/chatqa.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/chatqa.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/codellama-70b-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/command-r.gotmpl/
  inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/command-r.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/falcon-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/gemma-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/gemma3-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/granite-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/
  inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/llama2-chat.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/llama3-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/magicoder.gotmpl/
  inflating: ollama-0.12.5/template/testdata/magicoder.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/magicoder.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/magicoder.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/mistral-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/openchat.gotmpl/
  inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/openchat.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/phi-3.gotmpl/
  inflating: ollama-0.12.5/template/testdata/phi-3.gotmpl/system-user-assistant-user
  inflating: ollama-0.12.5/template/testdata/phi-3.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/phi-3.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/solar-instruct.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/
  inflating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/starcoder2-instruct.gotmpl/user-assistant-user
  inflating: ollama-0.12.5/template/testdata/templates.jsonl
   creating: ollama-0.12.5/template/testdata/vicuna.gotmpl/
  inflating: ollama-0.12.5/template/testdata/vicuna.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/vicuna.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/vicuna.gotmpl/user-assistant-user
   creating: ollama-0.12.5/template/testdata/zephyr.gotmpl/
  inflating: ollama-0.12.5/template/testdata/zephyr.gotmpl/system-user-assistant-user
 extracting: ollama-0.12.5/template/testdata/zephyr.gotmpl/user
  inflating: ollama-0.12.5/template/testdata/zephyr.gotmpl/user-assistant-user
  inflating: ollama-0.12.5/template/vicuna.gotmpl
  inflating: ollama-0.12.5/template/vicuna.json
  inflating: ollama-0.12.5/template/zephyr.gotmpl
  inflating: ollama-0.12.5/template/zephyr.json
   creating: ollama-0.12.5/thinking/
  inflating: ollama-0.12.5/thinking/parser.go
  inflating: ollama-0.12.5/thinking/parser_test.go
  inflating: ollama-0.12.5/thinking/template.go
  inflating: ollama-0.12.5/thinking/template_test.go
   creating: ollama-0.12.5/tools/
  inflating: ollama-0.12.5/tools/template.go
  inflating: ollama-0.12.5/tools/template_test.go
  inflating: ollama-0.12.5/tools/tools.go
  inflating: ollama-0.12.5/tools/tools_test.go
   creating: ollama-0.12.5/types/
   creating: ollama-0.12.5/types/errtypes/
  inflating: ollama-0.12.5/types/errtypes/errtypes.go
   creating: ollama-0.12.5/types/model/
  inflating: ollama-0.12.5/types/model/capability.go
  inflating: ollama-0.12.5/types/model/name.go
  inflating: ollama-0.12.5/types/model/name_test.go
   creating: ollama-0.12.5/types/model/testdata/
   creating: ollama-0.12.5/types/model/testdata/fuzz/
   creating: ollama-0.12.5/types/model/testdata/fuzz/FuzzName/
 extracting: ollama-0.12.5/types/model/testdata/fuzz/FuzzName/d37463aa416f6bab
   creating: ollama-0.12.5/types/syncmap/
  inflating: ollama-0.12.5/types/syncmap/syncmap.go
   creating: ollama-0.12.5/version/
  inflating: ollama-0.12.5/version/version.go
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ cd ollama-0.12.5
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ cd ollama-0.12.5
+ /usr/lib/rpm/rpmuncompress -x -v /builddir/build/SOURCES/main.zip
TZ=UTC /usr/bin/unzip -u '/builddir/build/SOURCES/main.zip'
Archive: /builddir/build/SOURCES/main.zip
6b065b422692c6a1b4502c7cfe8aeed263ddc1b5
   creating: ollamad-main/
  inflating: ollamad-main/README.md
  inflating: ollamad-main/ollamad.conf
  inflating: ollamad-main/ollamad.service
   creating: ollamad-main/packaging/
  inflating: ollamad-main/packaging/ollama.spec
+ STATUS=0
+ '[' 0 -ne 0 ']'
+ /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ RPM_EC=0
++ jobs -p
+ exit 0
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.SIbrEj
+ umask 022
+ cd /builddir/build/BUILD/ollama-0.12.5-build
+ CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CFLAGS
+ CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer '
+ export CXXFLAGS
+ FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FFLAGS
+ FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules '
+ export FCFLAGS
+ VALAFLAGS=-g
+ export VALAFLAGS
+ RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn'
+ export RUSTFLAGS
+ LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes '
+ export LDFLAGS
+ LT_SYS_LIBRARY_PATH=/usr/lib64:
+ export LT_SYS_LIBRARY_PATH
+ CC=gcc
+ export CC
+ CXX=g++
+ export CXX
+ cd ollama-0.12.5
+ export PATH=/usr/lib64/ccache:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/cuda/bin
+ PATH=/usr/lib64/ccache:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/cuda/bin
+ export GIN_MODE=release
+ GIN_MODE=release
+ cmake -B /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5 -DPATH=/usr/lib64/ccache:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/cuda/bin:/usr/local/cuda/bin -DCUDACXX=/usr/local/cuda/bin/nvcc -DGIN_MODE=release -DGGML_CCACHE=OFF -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
-- The C compiler identification is GNU 15.2.1
-- The CXX compiler identification is GNU 15.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/lib64/ccache/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/lib64/ccache/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- x86 detected
-- Adding CPU backend variant ggml-cpu-x64:
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sse42: -msse4.2 GGML_SSE42
-- x86 detected
-- Adding CPU backend variant ggml-cpu-sandybridge: -msse4.2;-mavx GGML_SSE42;GGML_AVX
-- x86 detected
-- Adding CPU backend variant ggml-cpu-haswell: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2 GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2
-- x86 detected
-- Adding CPU backend variant ggml-cpu-skylakex: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512
-- x86 detected
-- Adding CPU backend variant ggml-cpu-icelake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavx512f;-mavx512cd;-mavx512vl;-mavx512dq;-mavx512bw;-mavx512vbmi;-mavx512vnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX512;GGML_AVX512_VBMI;GGML_AVX512_VNNI
-- x86 detected
-- Adding CPU backend variant ggml-cpu-alderlake: -msse4.2;-mf16c;-mfma;-mbmi2;-mavx;-mavx2;-mavxvnni GGML_SSE42;GGML_F16C;GGML_FMA;GGML_BMI2;GGML_AVX;GGML_AVX2;GGML_AVX_VNNI
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Configuring done (1.2s)
-- Generating done (0.0s)
CMake Warning:
  Manually-specified variables were not used by the project:

    CUDACXX
    CUDA_TOOLKIT_ROOT_DIR
    GIN_MODE
    PATH

-- Build files have been written to: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5
+ cmake --build /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5
[  1%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5851:13: warning: ‘ggml_hash_map_free’ defined but not used [-Wunused-function]
 5851 | static void ggml_hash_map_free(struct hash_map * map) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5844:26: warning: ‘ggml_new_hash_map’ defined but not used [-Wunused-function]
 5844 | static struct hash_map * ggml_new_hash_map(size_t size) {
      |                          ^~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.c:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[  2%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:420:12: warning: ‘ggml_vbuffer_n_chunks’ defined but not used [-Wunused-function]
  420 | static int ggml_vbuffer_n_chunks(struct vbuffer * buf) {
      |            ^~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:108:13: warning: ‘ggml_buffer_address_less’ defined but not used [-Wunused-function]
  108 | static bool ggml_buffer_address_less(struct buffer_address a, struct buffer_address b) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-alloc.c:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
[  3%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-backend.cpp:14:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct
ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ [ 4%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-opt.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 
| static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 5%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o [ 5%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:4068:12: warning: ‘iq1_find_best_neighbour’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:579:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-quants.c:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
[ 6%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_hip.cpp.o
[ 7%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/mem_nvml.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp: In function ‘int ggml_nvml_init()’:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp:131:9: warning: unused variable ‘status’ [-Wunused-variable]
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/mem_nvml.cpp:12:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h: At global scope:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 8%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/gguf.cpp:3:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 8%] Linking CXX shared library ../../../../../lib/ollama/libggml-base.so
[ 8%] Built target ggml-base
[ 9%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 9%] Built target ggml-cpu-x64-feats
[ 9%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
[ 10%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 11%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/repack.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 12%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/hbm.cpp.o
[ 12%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/quants.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
[ 13%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/traits.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 14%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 15%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t 
i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 16%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 17%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: 
warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o In file included from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 18%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 19%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/quants.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 20%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | 
static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * 
params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 21%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-x64.so [ 21%] Built target ggml-cpu-x64 [ 22%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 22%] Built target ggml-cpu-sse42-feats [ 23%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 23%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t 
ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | 
static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 24%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 25%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/hbm.cpp.o [ 26%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { 
| ^~~~~~~~~~~~~~~~~~~~ [ 26%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | 
^~~~~~~~~~~~~~~~~~~~ [ 27%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct 
ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | 
^~~~~~~~~~~~~~~~~~~~
[ 28%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o
[ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o
[ 29%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o
[ 30%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o
[ 31%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
[ 32%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 32%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/quants.c.o
[ 33%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 34%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sse42.so
[ 34%] Built target ggml-cpu-sse42
[ 35%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 35%] Built target ggml-cpu-sandybridge-feats
[ 36%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o
[ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 37%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t 
ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used 
[-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 38%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/hbm.cpp.o [ 39%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used 
[-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but 
not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ 
defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 40%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 41%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void 
ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 42%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct 
ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 43%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
[ 44%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) {
[ 45%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o
[ 46%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/quants.c.o
[ 46%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 47%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-sandybridge.so
[ 47%] Built target ggml-cpu-sandybridge
[ 48%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 48%] Built target ggml-cpu-haswell-feats
[ 49%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o
[ 50%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o
[ 51%] Building CXX object
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/repack.cpp.o
[ 51%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/hbm.cpp.o
[ 52%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/quants.c.o
[ 53%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/traits.cpp.o
[ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 54%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 55%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t 
i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 56%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 57%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 
200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 58%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 59%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used 
[-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 60%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/arch/x86/repack.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function]
  142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function]
   92 | static bool ggml_op_is_empty(enum ggml_op op) {
      |             ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function]
   77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) {
      |             ^~~~~~~~~~~~~~~~~~~~
[ 60%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-haswell.so
[ 60%] Built target ggml-cpu-haswell
[ 61%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 61%] Built target ggml-cpu-skylakex-feats
[ 62%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
[ 62%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o
[ 63%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/repack.cpp.o
[ 64%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/hbm.cpp.o
[ 65%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/quants.c.o
[ 65%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/traits.cpp.o
[ 66%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o
[ 67%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o
[ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o
[ 68%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o
In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3,
                 from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1:
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function]
  269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) {
      |             ^~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function]
  200 | static size_t ggml_bitset_size(size_t n) {
      |               ^~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function]
  163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function]
  158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) {
      |             ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) {
      |              ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function]
  148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) {
      |                ^~~~~~~~~~~~~~~~~~~~~~
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 69%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 70%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static 
size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * 
params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 71%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { 
| ^~~~~~~~~~~~~~~~~~~~ [ 72%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void 
ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 73%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-skylakex.so [ 73%] Built target ggml-cpu-skylakex [ 74%] Building CXX object 
ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o [ 74%] Built target ggml-cpu-icelake-feats [ 75%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.c:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | 
^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.h:6, from 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, 
int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 76%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t 
ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but 
not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 77%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/hbm.cpp.o [ 78%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ 
defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: 
‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/traits.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined 
but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 79%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 80%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/amx.h:2, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/amx/mmq.cpp:7: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, 
ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t 
ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 81%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * 
hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 82%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 83%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void 
ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 84%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ 
[ 85%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/quants.c.o
[ 85%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/arch/x86/repack.cpp.o
[ 86%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-icelake.so
[ 86%] Built target ggml-cpu-icelake
[ 87%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/arch/x86/cpu-feats.cpp.o
[ 87%] Built target ggml-cpu-alderlake-feats
[ 88%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o
[ 89%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/repack.cpp.o
[ 90%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/hbm.cpp.o
[ 91%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/quants.c.o
[ 92%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/traits.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o
[ 93%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o
ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 94%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 95%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ggml-cpu-impl.h:6, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/traits.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:4, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/unary-ops.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor 
* key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.h:5, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/vec.cpp:1: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 96%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/binary-ops.h:3, from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:5: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/common.h:71:36: warning: ‘std::pair get_thread_range(const ggml_compute_params*, const ggml_tensor*)’ defined but not used [-Wunused-function] 71 | static std::pair get_thread_range(const struct ggml_compute_params * params, const struct ggml_tensor * src0) { | ^~~~~~~~~~~~~~~~ In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/ops.cpp:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static 
size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ 
/builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 97%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/llamafile/sgemm.cpp:52: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void 
ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined 
but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [ 98%] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/quants.c.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/quants.c:4: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘ggml_hash_find_or_insert’ defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘ggml_hash_insert’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘ggml_hash_contains’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘ggml_bitset_size’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘ggml_set_op_params_f32’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘ggml_set_op_params_i32’ defined but not used 
[-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘ggml_get_op_params_f32’ defined but not used [-Wunused-function] 153 | static float ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘ggml_get_op_params_i32’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘ggml_set_op_params’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘ggml_op_is_empty’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘ggml_are_same_layout’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/arch/x86/repack.cpp.o In file included from /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-cpu/arch/x86/repack.cpp:6: /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:295:15: warning: ‘size_t ggml_hash_find_or_insert(ggml_hash_set*, ggml_tensor*)’ 
defined but not used [-Wunused-function] 295 | static size_t ggml_hash_find_or_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:274:15: warning: ‘size_t ggml_hash_insert(ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 274 | static size_t ggml_hash_insert(struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:269:13: warning: ‘bool ggml_hash_contains(const ggml_hash_set*, ggml_tensor*)’ defined but not used [-Wunused-function] 269 | static bool ggml_hash_contains(const struct ggml_hash_set * hash_set, struct ggml_tensor * key) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:200:15: warning: ‘size_t ggml_bitset_size(size_t)’ defined but not used [-Wunused-function] 200 | static size_t ggml_bitset_size(size_t n) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:163:13: warning: ‘void ggml_set_op_params_f32(ggml_tensor*, uint32_t, float)’ defined but not used [-Wunused-function] 163 | static void ggml_set_op_params_f32(struct ggml_tensor * tensor, uint32_t i, float value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:158:13: warning: ‘void ggml_set_op_params_i32(ggml_tensor*, uint32_t, int32_t)’ defined but not used [-Wunused-function] 158 | static void ggml_set_op_params_i32(struct ggml_tensor * tensor, uint32_t i, int32_t value) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:153:14: warning: ‘float ggml_get_op_params_f32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 153 | static float 
ggml_get_op_params_f32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:148:16: warning: ‘int32_t ggml_get_op_params_i32(const ggml_tensor*, uint32_t)’ defined but not used [-Wunused-function] 148 | static int32_t ggml_get_op_params_i32(const struct ggml_tensor * tensor, uint32_t i) { | ^~~~~~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:142:13: warning: ‘void ggml_set_op_params(ggml_tensor*, const void*, size_t)’ defined but not used [-Wunused-function] 142 | static void ggml_set_op_params(struct ggml_tensor * tensor, const void * params, size_t params_size) { | ^~~~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:92:13: warning: ‘bool ggml_op_is_empty(ggml_op)’ defined but not used [-Wunused-function] 92 | static bool ggml_op_is_empty(enum ggml_op op) { | ^~~~~~~~~~~~~~~~ /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ml/backend/ggml/ggml/src/ggml-impl.h:77:13: warning: ‘bool ggml_are_same_layout(const ggml_tensor*, const ggml_tensor*)’ defined but not used [-Wunused-function] 77 | static bool ggml_are_same_layout(const struct ggml_tensor * a, const struct ggml_tensor * b) { | ^~~~~~~~~~~~~~~~~~~~ [100%] Linking CXX shared module ../../../../../lib/ollama/libggml-cpu-alderlake.so [100%] Built target ggml-cpu-alderlake + go build go: downloading github.com/spf13/cobra v1.7.0 go: downloading github.com/containerd/console v1.0.3 go: downloading github.com/mattn/go-runewidth v0.0.14 go: downloading github.com/olekukonko/tablewriter v0.0.5 go: downloading golang.org/x/crypto v0.36.0 go: downloading golang.org/x/sync v0.12.0 go: downloading golang.org/x/term v0.30.0 go: downloading golang.org/x/sys v0.31.0 go: downloading github.com/spf13/pflag v1.0.5 go: downloading github.com/rivo/uniseg v0.2.0 go: downloading 
github.com/google/uuid v1.6.0
go: downloading golang.org/x/text v0.23.0
go: downloading github.com/emirpasic/gods/v2 v2.0.0-alpha
go: downloading github.com/gin-contrib/cors v1.7.2
go: downloading github.com/gin-gonic/gin v1.10.0
go: downloading golang.org/x/image v0.22.0
go: downloading github.com/d4l3k/go-bfloat16 v0.0.0-20211005043715-690c3bdd05f1
go: downloading github.com/nlpodyssey/gopickle v0.3.0
go: downloading github.com/pdevine/tensor v0.0.0-20240510204454-f88f4562727c
go: downloading gonum.org/v1/gonum v0.15.0
go: downloading github.com/x448/float16 v0.8.4
go: downloading google.golang.org/protobuf v1.34.1
go: downloading github.com/agnivade/levenshtein v1.1.1
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.20
go: downloading golang.org/x/net v0.38.0
go: downloading github.com/dlclark/regexp2 v1.11.4
go: downloading github.com/apache/arrow/go/arrow v0.0.0-20211112161151-bc219186db40
go: downloading github.com/chewxy/hm v1.0.0
go: downloading github.com/chewxy/math32 v1.11.0
go: downloading github.com/google/flatbuffers v24.3.25+incompatible
go: downloading github.com/pkg/errors v0.9.1
go: downloading go4.org/unsafe/assume-no-moving-gc v0.0.0-20231121144256-b99613f794b6
go: downloading gorgonia.org/vecf32 v0.9.0
go: downloading gorgonia.org/vecf64 v0.9.0
go: downloading golang.org/x/exp v0.0.0-20250218142911-aa4b98e5adaa
go: downloading github.com/go-playground/validator/v10 v10.20.0
go: downloading github.com/pelletier/go-toml/v2 v2.2.2
go: downloading github.com/ugorji/go/codec v1.2.12
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1
go: downloading github.com/xtgo/set v1.0.0
go: downloading github.com/gogo/protobuf v1.3.2
go: downloading github.com/golang/protobuf v1.5.4
go: downloading github.com/gabriel-vasile/mimetype v1.4.3
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn
v1.4.0 go: downloading github.com/go-playground/locales v0.14.1 + RPM_EC=0 ++ jobs -p + exit 0 Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.KeUWFK + umask 022 + cd /builddir/build/BUILD/ollama-0.12.5-build + '[' /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT '!=' / ']' + rm -rf /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT ++ dirname /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT + mkdir -p /builddir/build/BUILD/ollama-0.12.5-build + mkdir /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT + CFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CFLAGS + CXXFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer ' + export CXXFLAGS + FFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules 
' + export FFLAGS + FCFLAGS='-O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Wp,-U_FORTIFY_SOURCE,-D_FORTIFY_SOURCE=3 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -mtls-dialect=gnu2 -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -I/usr/lib64/gfortran/modules ' + export FCFLAGS + VALAFLAGS=-g + export VALAFLAGS + RUSTFLAGS='-Copt-level=3 -Cdebuginfo=2 -Ccodegen-units=1 -Cstrip=none -Cforce-frame-pointers=yes -Clink-arg=-specs=/usr/lib/rpm/redhat/redhat-package-notes --cap-lints=warn' + export RUSTFLAGS + LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,pack-relative-relocs -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 -specs=/usr/lib/rpm/redhat/redhat-package-notes ' + export LDFLAGS + LT_SYS_LIBRARY_PATH=/usr/lib64: + export LT_SYS_LIBRARY_PATH + CC=gcc + export CC + CXX=g++ + export CXX + cd ollama-0.12.5 + install -Dm0755 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollama /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/bin/ollama + install -Dm0644 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollamad-main/ollamad.service /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/lib/systemd/system/ollamad.service + install -Dm0644 /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/ollamad-main/ollamad.conf /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/etc/ollama/ollamad.conf + /usr/bin/find-debuginfo -j2 --strict-build-id -m -i --build-id-seed 0.12.5-1.fc42 --unique-debug-suffix -0.12.5-1.fc42.x86_64 --unique-debug-src-base ollama-0.12.5-1.fc42.x86_64 --run-dwz --dwz-low-mem-die-limit 10000000 --dwz-max-die-limit 110000000 -S debugsourcefiles.list /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5 
find-debuginfo: starting Extracting debug info from 1 files warning: Unsupported auto-load script at offset 0 in section .debug_gdb_scripts of file /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/bin/ollama. Use `info auto-load python-scripts [REGEXP]' to list them. DWARF-compressing 1 files dwz: ./usr/bin/ollama-0.12.5-1.fc42.x86_64.debug: Found compressed .debug_aranges section, not attempting dwz compression sepdebugcrcfix: Updated 0 CRC32s, 1 CRC32s did match. Creating .debug symlinks for symlinks to ELF files Copying sources found by 'debugedit -l' to /usr/src/debug/ollama-0.12.5-1.fc42.x86_64 find-debuginfo: done + /usr/lib/rpm/check-buildroot + /usr/lib/rpm/redhat/brp-ldconfig + /usr/lib/rpm/brp-compress + /usr/lib/rpm/redhat/brp-strip-lto /usr/bin/strip + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip + /usr/lib/rpm/check-rpaths + /usr/lib/rpm/redhat/brp-mangle-shebangs + /usr/lib/rpm/brp-remove-la-files + env /usr/lib/rpm/redhat/brp-python-bytecompile '' 1 0 -j2 + /usr/lib/rpm/redhat/brp-python-hardlink + /usr/bin/add-determinism --brp -j2 /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT Scanned 103 directories and 305 files, processed 0 inodes, 0 modified (0 replaced + 0 rewritten), 0 unsupported format, 0 errors Reading /builddir/build/BUILD/ollama-0.12.5-build/SPECPARTS/rpm-debuginfo.specpart Processing files: ollama-0.12.5-1.fc42.x86_64 Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.F6BJoK + umask 022 + cd /builddir/build/BUILD/ollama-0.12.5-build + cd ollama-0.12.5 + DOCDIR=/builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export DOCDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/README.md /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/doc/ollama + RPM_EC=0 ++ jobs -p + exit 0 Executing(%license): /bin/sh -e /var/tmp/rpm-tmp.AoUU3N + umask 022 + 
cd /builddir/build/BUILD/ollama-0.12.5-build + cd ollama-0.12.5 + LICENSEDIR=/builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama + export LC_ALL=C.UTF-8 + LC_ALL=C.UTF-8 + export LICENSEDIR + /usr/bin/mkdir -p /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama + cp -pr /builddir/build/BUILD/ollama-0.12.5-build/ollama-0.12.5/LICENSE /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT/usr/share/licenses/ollama + RPM_EC=0 ++ jobs -p + exit 0 Provides: config(ollama) = 0.12.5-1.fc42 ollama = 0.12.5-1.fc42 ollama(x86-64) = 0.12.5-1.fc42 Requires(interp): /bin/sh /bin/sh /bin/sh Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Requires(pre): /usr/bin/getent /usr/sbin/useradd Requires(post): /bin/sh Requires(preun): /bin/sh Requires(postun): /bin/sh Requires: ld-linux-x86-64.so.2()(64bit) ld-linux-x86-64.so.2(GLIBC_2.3)(64bit) libc.so.6()(64bit) libc.so.6(GLIBC_2.14)(64bit) libc.so.6(GLIBC_2.17)(64bit) libc.so.6(GLIBC_2.2.5)(64bit) libc.so.6(GLIBC_2.29)(64bit) libc.so.6(GLIBC_2.3.2)(64bit) libc.so.6(GLIBC_2.32)(64bit) libc.so.6(GLIBC_2.33)(64bit) libc.so.6(GLIBC_2.34)(64bit) libc.so.6(GLIBC_2.38)(64bit) libc.so.6(GLIBC_2.7)(64bit) libgcc_s.so.1()(64bit) libgcc_s.so.1(GCC_3.0)(64bit) libgcc_s.so.1(GCC_3.4)(64bit) libm.so.6()(64bit) libm.so.6(GLIBC_2.2.5)(64bit) libm.so.6(GLIBC_2.27)(64bit) libm.so.6(GLIBC_2.29)(64bit) libresolv.so.2()(64bit) libstdc++.so.6()(64bit) libstdc++.so.6(CXXABI_1.3)(64bit) libstdc++.so.6(CXXABI_1.3.11)(64bit) libstdc++.so.6(CXXABI_1.3.13)(64bit) libstdc++.so.6(CXXABI_1.3.15)(64bit) libstdc++.so.6(CXXABI_1.3.2)(64bit) libstdc++.so.6(CXXABI_1.3.3)(64bit) libstdc++.so.6(CXXABI_1.3.5)(64bit) libstdc++.so.6(CXXABI_1.3.9)(64bit) libstdc++.so.6(GLIBCXX_3.4)(64bit) libstdc++.so.6(GLIBCXX_3.4.11)(64bit) libstdc++.so.6(GLIBCXX_3.4.14)(64bit) libstdc++.so.6(GLIBCXX_3.4.15)(64bit) libstdc++.so.6(GLIBCXX_3.4.17)(64bit) 
libstdc++.so.6(GLIBCXX_3.4.18)(64bit) libstdc++.so.6(GLIBCXX_3.4.19)(64bit) libstdc++.so.6(GLIBCXX_3.4.20)(64bit) libstdc++.so.6(GLIBCXX_3.4.21)(64bit) libstdc++.so.6(GLIBCXX_3.4.22)(64bit) libstdc++.so.6(GLIBCXX_3.4.25)(64bit) libstdc++.so.6(GLIBCXX_3.4.26)(64bit) libstdc++.so.6(GLIBCXX_3.4.29)(64bit) libstdc++.so.6(GLIBCXX_3.4.30)(64bit) libstdc++.so.6(GLIBCXX_3.4.32)(64bit) libstdc++.so.6(GLIBCXX_3.4.9)(64bit) rtld(GNU_HASH) Processing files: ollama-debugsource-0.12.5-1.fc42.x86_64 Provides: ollama-debugsource = 0.12.5-1.fc42 ollama-debugsource(x86-64) = 0.12.5-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Processing files: ollama-debuginfo-0.12.5-1.fc42.x86_64 Provides: debuginfo(build-id) = 8eb2fe2e028e65578bd81a7040e22769536178c4 ollama-debuginfo = 0.12.5-1.fc42 ollama-debuginfo(x86-64) = 0.12.5-1.fc42 Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1 Recommends: ollama-debugsource(x86-64) = 0.12.5-1.fc42 Checking for unpackaged file(s): /usr/lib/rpm/check-files /builddir/build/BUILD/ollama-0.12.5-build/BUILDROOT Wrote: /builddir/build/RPMS/ollama-0.12.5-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-debuginfo-0.12.5-1.fc42.x86_64.rpm Wrote: /builddir/build/RPMS/ollama-debugsource-0.12.5-1.fc42.x86_64.rpm Executing(rmbuild): /bin/sh -e /var/tmp/rpm-tmp.eKAhYf + umask 022 + cd /builddir/build/BUILD/ollama-0.12.5-build + test -d /builddir/build/BUILD/ollama-0.12.5-build + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w /builddir/build/BUILD/ollama-0.12.5-build + rm -rf /builddir/build/BUILD/ollama-0.12.5-build + RPM_EC=0 ++ jobs -p + exit 0 Finish: rpmbuild ollama-0.12.5-1.fc42.src.rpm Finish: build phase for ollama-0.12.5-1.fc42.src.rpm INFO: chroot_scan: 1 files copied to /var/lib/copr-rpmbuild/results/chroot_scan INFO: /var/lib/mock/fedora-42-x86_64-1760542611.653638/root/var/log/dnf5.log INFO: 
chroot_scan: creating tarball /var/lib/copr-rpmbuild/results/chroot_scan.tar.gz /bin/tar: Removing leading `/' from member names INFO: Done(/var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm) Config(child) 15 minutes 24 seconds INFO: Results and/or logs in: /var/lib/copr-rpmbuild/results INFO: Cleaning up build root ('cleanup_on_success=True') Start: clean chroot INFO: unmounting tmpfs. Finish: clean chroot Finish: run Running FedoraReview tool Running: fedora-review --no-colors --prebuilt --rpm-spec --name ollama --mock-config /var/lib/copr-rpmbuild/results/configs/child.cfg cmd: ['fedora-review', '--no-colors', '--prebuilt', '--rpm-spec', '--name', 'ollama', '--mock-config', '/var/lib/copr-rpmbuild/results/configs/child.cfg'] cwd: /var/lib/copr-rpmbuild/results rc: 0 stdout: Cache directory "/var/lib/copr-rpmbuild/results/cache/libdnf5" does not exist. Nothing to clean. Review template in: /var/lib/copr-rpmbuild/results/ollama/review.txt fedora-review is automated tool, but *YOU* are responsible for manually reviewing the results and finishing the review. Do not just copy-paste the results without understanding them. stderr: INFO: Processing local files: ollama INFO: Getting .spec and .srpm Urls from : Local files in /var/lib/copr-rpmbuild/results INFO: --> SRPM url: file:///var/lib/copr-rpmbuild/results/ollama-0.12.5-1.fc42.src.rpm INFO: Using review directory: /var/lib/copr-rpmbuild/results/ollama INFO: Downloading (Source1): https://github.com/mwprado/ollamad/archive/refs/heads/main.zip INFO: Downloading (Source0): https://github.com/ollama/ollama/archive/refs/tags/v0.12.5.zip INFO: Running checks and generating report INFO: Installing built package(s) INFO: Reading configuration from /etc/mock/site-defaults.cfg INFO: Reading configuration from /etc/mock/chroot-aliases.cfg INFO: Reading configuration from /var/lib/copr-rpmbuild/results/configs/child.cfg INFO: WARNING: Probably non-rawhide buildroot used. 
Rawhide should be used for most package reviews INFO: Active plugins: Shell-api, C/C++, Generic Updating and loading repositories: Repositories loaded. Updating and loading repositories: Repositories loaded. INFO: ExclusiveArch dependency checking disabled, enable with EXARCH flag Cache directory "/var/lib/copr-rpmbuild/results/cache/libdnf5" does not exist. Nothing to clean. Review template in: /var/lib/copr-rpmbuild/results/ollama/review.txt fedora-review is automated tool, but *YOU* are responsible for manually reviewing the results and finishing the review. Do not just copy-paste the results without understanding them. Moving the results into `fedora-review' directory. Review template in: /var/lib/copr-rpmbuild/results/fedora-review/review.txt FedoraReview finished Running RPMResults tool Package info: { "packages": [ { "name": "ollama-debuginfo", "epoch": null, "version": "0.12.5", "release": "1.fc42", "arch": "x86_64" }, { "name": "ollama", "epoch": null, "version": "0.12.5", "release": "1.fc42", "arch": "src" }, { "name": "ollama", "epoch": null, "version": "0.12.5", "release": "1.fc42", "arch": "x86_64" }, { "name": "ollama-debugsource", "epoch": null, "version": "0.12.5", "release": "1.fc42", "arch": "x86_64" } ] } RPMResults finished