[2025-12-08 16:43:39,576][ INFO][PID:3823819] Marking build as starting
[2025-12-08 16:43:39,779][ INFO][PID:3823819] Checking for cancel request
[2025-12-08 16:43:39,780][ INFO][PID:3823819] VM allocation process starts
[2025-12-08 16:43:39,800][ INFO][PID:3823819] Trying to allocate VM: ResallocHost, ticket_id=220674, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:43:42,814][ INFO][PID:3823819] Allocated host ResallocHost, ticket_id=220674, hostname=2620:52:6:1161:dead:beef:cafe:c108, name=vmhost_p09_02_prod_00220063_20251208_161006, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:43:42,814][ INFO][PID:3823819] Allocating ssh connection to builder
[2025-12-08 16:43:42,815][ INFO][PID:3823819] Checking that builder machine is OK
[2025-12-08 16:44:42,881][ ERROR][PID:3823819] SSH connection lost on #1 attempt, let's retry after 10s, Connection broke:
OUT:
ERR:
ssh: connect to host 2620:52:6:1161:dead:beef:cafe:c108 port 22: Connection timed out
[2025-12-08 16:44:53,392][ ERROR][PID:3823819] Re-try request for task on 'ResallocHost, ticket_id=220674, hostname=2620:52:6:1161:dead:beef:cafe:c108, name=vmhost_p09_02_prod_00220063_20251208_161006, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']': Unable to finish after 1 SSH attempts
[2025-12-08 16:44:53,393][ INFO][PID:3823819] Releasing VM back to pool
[2025-12-08 16:44:53,402][ INFO][PID:3823819] Retry #1 (on other host)
[2025-12-08 16:44:53,403][ INFO][PID:3823819] Checking for cancel request
[2025-12-08 16:44:53,404][ INFO][PID:3823819] VM allocation process starts
[2025-12-08 16:44:53,415][ INFO][PID:3823819] Trying to allocate VM: ResallocHost, ticket_id=220686, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:44:56,429][ INFO][PID:3823819] Allocated host ResallocHost, ticket_id=220686, hostname=2620:52:6:1161:dead:beef:cafe:c10a, name=vmhost_p09_02_prod_00220563_20251208_162237, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:44:56,431][ INFO][PID:3823819] Allocating ssh connection to builder
[2025-12-08 16:44:56,432][ INFO][PID:3823819] Checking that builder machine is OK
[2025-12-08 16:45:56,506][ ERROR][PID:3823819] SSH connection lost on #1 attempt, let's retry after 10s, Connection broke:
OUT:
ERR:
ssh: connect to host 2620:52:6:1161:dead:beef:cafe:c10a port 22: Connection timed out
[2025-12-08 16:46:06,524][ ERROR][PID:3823819] Re-try request for task on 'ResallocHost, ticket_id=220686, hostname=2620:52:6:1161:dead:beef:cafe:c10a, name=vmhost_p09_02_prod_00220563_20251208_162237, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']': Unable to finish after 1 SSH attempts
[2025-12-08 16:46:06,526][ INFO][PID:3823819] Releasing VM back to pool
[2025-12-08 16:46:06,542][ INFO][PID:3823819] Retry #2 (on other host)
[2025-12-08 16:46:06,543][ INFO][PID:3823819] Checking for cancel request
[2025-12-08 16:46:06,543][ INFO][PID:3823819] VM allocation process starts
[2025-12-08 16:46:06,562][ INFO][PID:3823819] Trying to allocate VM: ResallocHost, ticket_id=220724, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:46:09,609][ INFO][PID:3823819] Allocated host ResallocHost, ticket_id=220724, hostname=2620:52:6:1161:dead:beef:cafe:c112, name=vmhost_p09_02_prod_00221027_20251208_163515, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']
[2025-12-08 16:46:10,215][ INFO][PID:3823819] Allocating ssh connection to builder
[2025-12-08 16:46:10,215][ INFO][PID:3823819] Checking that builder machine is OK
[2025-12-08 16:47:10,291][ ERROR][PID:3823819] SSH connection lost on #1 attempt, let's retry after 10s, Connection broke:
OUT:
ERR:
ssh: connect to host 2620:52:6:1161:dead:beef:cafe:c112 port 22: Connection timed out
[2025-12-08 16:47:20,292][ ERROR][PID:3823819] Re-try request for task on 'ResallocHost, ticket_id=220724, hostname=2620:52:6:1161:dead:beef:cafe:c112, name=vmhost_p09_02_prod_00221027_20251208_163515, requested_tags=['copr_builder', 'arch_ppc64le', 'arch_power9']': Unable to finish after 1 SSH attempts
[2025-12-08 16:47:20,293][ INFO][PID:3823819] Releasing VM back to pool
[2025-12-08 16:47:20,303][ ERROR][PID:3823819] Backend process error: Three host tried without success: {'2620:52:6:1161:dead:beef:cafe:c108', '2620:52:6:1161:dead:beef:cafe:c112', '2620:52:6:1161:dead:beef:cafe:c10a'}
[2025-12-08 16:47:20,303][WARNING][PID:3823819] Switching not-finished job state to 'failed'
[2025-12-08 16:47:20,304][ INFO][PID:3823819] Worker failed build, took 220.7984983921051
[2025-12-08 16:47:20,304][ INFO][PID:3823819] Sending build state back to frontend: {
  "builds": [
    {
      "timeout": 18000,
      "frontend_base_url": "https://copr.fedorainfracloud.org",
      "memory_reqs": 2048,
      "enable_net": true,
      "project_owner": "rhcontainerbot",
      "project_name": "podman-next",
      "project_dirname": "podman-next",
      "submitter": "packit",
      "ended_on": 1765212440.3038921,
      "started_on": 1765212219.5053937,
      "submitted_on": null,
      "status": 0,
      "chroot": "centos-stream-9-ppc64le",
      "arch": "ppc64le",
      "buildroot_pkgs": [],
      "task_id": "9887179-centos-stream-9-ppc64le",
      "build_id": 9887179,
      "package_name": "buildah",
      "package_version": "102:1.42.0-1.20251208162510585667.main.79.g1bd4b6d57",
      "git_repo": "https://copr-dist-git.fedorainfracloud.org/git/rhcontainerbot/podman-next/buildah",
      "git_hash": "e3cd43c65a99cc512ddf7995c29d7549032ac965",
      "git_branch": null,
      "source_type": null,
      "source_json": null,
      "pkg_name": null,
      "pkg_main_version": null,
      "pkg_epoch": null,
      "pkg_release": null,
      "srpm_url": null,
      "uses_devel_repo": false,
      "sandbox": "rhcontainerbot/podman-next--packit",
      "results": null,
      "appstream": false,
      "allow_user_ssh": false,
      "ssh_public_keys": null,
      "storage": null,
      "repos": [
        {
          "baseurl": "https://download.copr.fedorainfracloud.org/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/",
          "id": "copr_base",
          "name": "Copr repository",
          "priority": null
        },
        {
          "baseurl": "https://dl.fedoraproject.org/pub/epel/9/Everything/$basearch/",
          "id": "https_dl_fedoraproject_org_pub_epel_9_Everything_basearch",
          "name": "Additional repo https_dl_fedoraproject_org_pub_epel_9_Everything_basearch"
        }
      ],
      "background": false,
      "fedora_review": false,
      "isolation": "default",
      "repo_priority": null,
      "tags": [
        "arch_ppc64le",
        "arch_power9"
      ],
      "with_opts": [],
      "without_opts": [],
      "destdir": "/var/lib/copr/public_html/results/rhcontainerbot/podman-next",
      "results_repo_url": "https://download.copr.fedorainfracloud.org/results/rhcontainerbot/podman-next",
      "result_dir": "09887179-buildah",
      "built_packages": "",
      "id": 9887179,
      "mockchain_macros": {
        "copr_username": "rhcontainerbot",
        "copr_projectname": "podman-next",
        "vendor": "Fedora Project COPR (rhcontainerbot/podman-next)"
      }
    }
  ]
}
[2025-12-08 16:47:20,354][ INFO][PID:3823819] Sending fedora-messaging bus message in build.end
[2025-12-08 16:47:20,855][ INFO][PID:3823819] Compressing /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/builder-live.log by gzip
[2025-12-08 16:47:20,856][ INFO][PID:3823819] Running command 'gzip /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/builder-live.log' as PID 3843110
[2025-12-08 16:47:20,857][ INFO][PID:3823819] Finished after 0 seconds with exit code 1 (gzip /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/builder-live.log)
stdout:
stderr:
gzip: /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/builder-live.log: No such file or directory
[2025-12-08 16:47:20,857][ ERROR][PID:3823819] Unable to compress file /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/builder-live.log
[2025-12-08 16:47:20,858][ INFO][PID:3823819] Compressing /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/backend.log by gzip
[2025-12-08 16:47:20,858][ INFO][PID:3823819] Running command 'gzip /var/lib/copr/public_html/results/rhcontainerbot/podman-next/centos-stream-9-ppc64le/09887179-buildah/backend.log' as PID 3843111