>>> ollama: Building community/ollama 0.10.1-r0 (using abuild 3.15.0-r2) started Fri, 01 Aug 2025 11:23:20 +0000
>>> ollama: Validating /home/buildozer/aports/community/ollama/APKBUILD...
>>> ollama: Analyzing dependencies...
>>> ollama: Installing for build: build-base go>=1.24.0 cmake ninja patchelf
( 1/10) Installing go (1.24.5-r1)
( 2/10) Installing libbz2 (1.0.8-r6)
( 3/10) Installing xz-libs (5.8.1-r0)
( 4/10) Installing libarchive (3.8.1-r0)
( 5/10) Installing rhash-libs (1.4.6-r0)
( 6/10) Installing libuv (1.51.0-r0)
( 7/10) Installing cmake (4.0.3-r0)
( 8/10) Installing samurai (1.2-r7)
( 9/10) Installing patchelf (0.18.0-r3)
(10/10) Installing .makedepends-ollama (20250801.112320)
busybox-1.37.0-r21.trigger: Executing script...
OK: 641 MiB in 118 packages
>>> ollama: Cleaning up srcdir
>>> ollama: Cleaning up pkgdir
>>> ollama: Cleaning up tmpdir
>>> ollama: Fetching https://distfiles.alpinelinux.org/distfiles/edge//ollama-0.10.1.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (22) The requested URL returned error: 404
>>> ollama: Fetching ollama-0.10.1.tar.gz::https://github.com/ollama/ollama/archive/refs/tags/v0.10.1.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 2685k    0 2685k    0     0  1636k      0 --:--:--  0:00:01 --:--:-- 2729k
100 9218k    0 9218k    0     0  3551k      0 --:--:--  0:00:02 --:--:-- 4754k
100  9.9M    0  9.9M    0     0  3745k      0 --:--:--  0:00:02 --:--:-- 4940k
>>> ollama: Fetching https://distfiles.alpinelinux.org/distfiles/edge//ollama-0.10.1.tar.gz
>>> ollama: Checking sha512sums...
ollama-0.10.1.tar.gz: OK
>>> ollama: Unpacking /var/cache/distfiles/edge/ollama-0.10.1.tar.gz...
-- The C compiler identification is GNU 15.1.1
-- The CXX compiler identification is GNU 15.1.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: s390x
-- Including CPU backend
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-x64: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-sse42: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-sandybridge: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-haswell: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-skylakex: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-icelake: -march=z15
-- s390x detected
-- z15 target
-- Adding CPU backend variant ggml-cpu-alderlake: -march=z15
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
-- Looking for a HIP compiler
-- Looking for a HIP compiler - NOTFOUND
-- Configuring done (0.3s)
-- Generating done (0.0s)
CMake Warning:
  Manually-specified variables were not used by the project:
    CMAKE_INSTALL_LIBDIR
-- Build files have been written to: /home/buildozer/aports/community/ollama/src/ollama-0.10.1/build
[1/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[2/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ops.cpp.o
[3/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/vec.cpp.o
[4/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/unary-ops.cpp.o
[5/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/binary-ops.cpp.o
[6/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/mmq.cpp.o
[7/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/amx/amx.cpp.o
[8/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[9/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu-quants.c.o
[10/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[11/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[12/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.cpp.o
[13/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake.dir/ggml-cpu/ggml-cpu.c.o
[14/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-alderlake-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[15/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/llamafile/sgemm.cpp.o
[16/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ops.cpp.o
[17/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/vec.cpp.o
[18/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/unary-ops.cpp.o
[19/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/binary-ops.cpp.o
[20/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/mmq.cpp.o
[21/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/amx/amx.cpp.o
[22/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[23/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu-quants.c.o
[24/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[25/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[26/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.cpp.o
[27/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake.dir/ggml-cpu/ggml-cpu.c.o
[28/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-icelake-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[29/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/llamafile/sgemm.cpp.o
[30/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ops.cpp.o
[31/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/vec.cpp.o
[32/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/unary-ops.cpp.o
[33/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/binary-ops.cpp.o
[34/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/mmq.cpp.o
[35/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/amx/amx.cpp.o
[36/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[37/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu-quants.c.o
[38/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[39/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[40/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.cpp.o
[41/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex.dir/ggml-cpu/ggml-cpu.c.o
[42/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-skylakex-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[43/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/llamafile/sgemm.cpp.o
[44/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ops.cpp.o
[45/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/vec.cpp.o
[46/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/unary-ops.cpp.o
[47/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/binary-ops.cpp.o
[48/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/mmq.cpp.o
[49/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/amx/amx.cpp.o
[50/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[51/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu-quants.c.o
[52/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[53/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[54/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.cpp.o
[55/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell.dir/ggml-cpu/ggml-cpu.c.o
[56/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-haswell-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[57/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/llamafile/sgemm.cpp.o
[58/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ops.cpp.o
[59/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/vec.cpp.o
[60/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/unary-ops.cpp.o
[61/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/binary-ops.cpp.o
[62/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/mmq.cpp.o
[63/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/amx/amx.cpp.o
[64/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[65/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu-quants.c.o
[66/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[67/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[68/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.cpp.o
[69/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge.dir/ggml-cpu/ggml-cpu.c.o
[70/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sandybridge-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[71/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/llamafile/sgemm.cpp.o
[72/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ops.cpp.o
[73/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/vec.cpp.o
[74/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/unary-ops.cpp.o
[75/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/binary-ops.cpp.o
[76/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/mmq.cpp.o
[77/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/amx/amx.cpp.o
[78/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[79/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu-quants.c.o
[80/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[81/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[82/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.cpp.o
[83/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42.dir/ggml-cpu/ggml-cpu.c.o
[84/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-sse42-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[85/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/llamafile/sgemm.cpp.o
[86/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ops.cpp.o
[87/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/vec.cpp.o
[88/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/unary-ops.cpp.o
[89/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/binary-ops.cpp.o
[90/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/mmq.cpp.o
[91/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/amx/amx.cpp.o
[92/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu-traits.cpp.o
[93/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu-quants.c.o
[94/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu-hbm.cpp.o
[95/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu-aarch64.cpp.o
[96/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.cpp.o
[97/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64.dir/ggml-cpu/ggml-cpu.c.o
[98/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-cpu-x64-feats.dir/ggml-cpu/cpu-feats-x86.cpp.o
[99/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/gguf.cpp.o
[100/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-quants.c.o
[101/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-threading.cpp.o
[102/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-opt.cpp.o
[103/113] Building CXX object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-backend.cpp.o
[104/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml-alloc.c.o
[105/113] Building C object ml/backend/ggml/ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o
[106/113] Linking CXX shared library lib/ollama/libggml-base.so
[107/113] Linking CXX shared module lib/ollama/libggml-cpu-alderlake.so
[108/113] Linking CXX shared module lib/ollama/libggml-cpu-icelake.so
[109/113] Linking CXX shared module lib/ollama/libggml-cpu-skylakex.so
[110/113] Linking CXX shared module lib/ollama/libggml-cpu-haswell.so
[111/113] Linking CXX shared module lib/ollama/libggml-cpu-sandybridge.so
[112/113] Linking CXX shared module lib/ollama/libggml-cpu-sse42.so
[113/113] Linking CXX shared module lib/ollama/libggml-cpu-x64.so
# github.com/ollama/ollama/llama/llama.cpp/src
In file included from /usr/include/c++/15.1.1/s390x-alpine-linux-musl/bits/c++allocator.h:33,
                 from /usr/include/c++/15.1.1/bits/allocator.h:46,
                 from /usr/include/c++/15.1.1/string:45,
                 from llama-vocab.h:5,
                 from llama-vocab.cpp:1:
In member function 'void std::__new_allocator<_Tp>::deallocate(_Tp*, size_type) [with _Tp = std::__cxx11::basic_string]',
    inlined from 'static void std::allocator_traits >::deallocate(allocator_type&, pointer, size_type) [with _Tp = std::__cxx11::basic_string]' at /usr/include/c++/15.1.1/bits/alloc_traits.h:649:23,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:396:19,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:392:7,
    inlined from 'std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:375:15,
    inlined from 'std::vector<_Tp, _Alloc>::~vector() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:805:7,
    inlined from 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)' at llama-vocab.cpp:2085:26:
/usr/include/c++/15.1.1/bits/new_allocator.h:172:66: warning: 'void operator delete(void*, std::size_t)' called on pointer '__result' with nonzero offset 32 [-Wfree-nonheap-object]
  172 |         _GLIBCXX_OPERATOR_DELETE(_GLIBCXX_SIZED_DEALLOC(__p, __n));
      |                                                                  ^
llama-vocab.cpp: In member function 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)':
llama-vocab.cpp:1372:6: note: declared here
 1372 | void llama_vocab::impl::load(llama_model_loader & ml, const LLM_KV & kv) {
      |      ^~~~~~~~~~~
In member function 'void std::__new_allocator<_Tp>::deallocate(_Tp*, size_type) [with _Tp = std::__cxx11::basic_string]',
    inlined from 'static void std::allocator_traits >::deallocate(allocator_type&, pointer, size_type) [with _Tp = std::__cxx11::basic_string]' at /usr/include/c++/15.1.1/bits/alloc_traits.h:649:23,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:396:19,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:392:7,
    inlined from 'std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:375:15,
    inlined from 'std::vector<_Tp, _Alloc>::~vector() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:805:7,
    inlined from 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)' at llama-vocab.cpp:2087:33:
/usr/include/c++/15.1.1/bits/new_allocator.h:172:66: warning: 'void operator delete(void*, std::size_t)' called on pointer '__result' with nonzero offset 32 [-Wfree-nonheap-object]
  172 |         _GLIBCXX_OPERATOR_DELETE(_GLIBCXX_SIZED_DEALLOC(__p, __n));
      |                                                                  ^
llama-vocab.cpp: In member function 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)':
llama-vocab.cpp:1372:6: note: declared here
 1372 | void llama_vocab::impl::load(llama_model_loader & ml, const LLM_KV & kv) {
      |      ^~~~~~~~~~~
# github.com/ollama/ollama/llama/llama.cpp/src
unicode.cpp: In function 'std::wstring unicode_wstring_from_utf8(const std::string&)':
unicode.cpp:229:10: warning: 'template class std::__cxx11::wstring_convert' is deprecated [-Wdeprecated-declarations]
  229 |     std::wstring_convert> conv;
      |          ^~~~~~~~~~~~~~~
In file included from /usr/include/c++/15.1.1/locale:47,
                 from unicode.cpp:18:
/usr/include/c++/15.1.1/bits/locale_conv.h:262:33: note: declared here
  262 |   class _GLIBCXX17_DEPRECATED wstring_convert
      |                               ^~~~~~~~~~~~~~~
# github.com/ollama/ollama/llama/llama.cpp/src
In file included from /usr/include/c++/15.1.1/s390x-alpine-linux-musl/bits/c++allocator.h:33,
                 from /usr/include/c++/15.1.1/bits/allocator.h:46,
                 from /usr/include/c++/15.1.1/string:45,
                 from llama-vocab.h:5,
                 from llama-vocab.cpp:1:
In member function 'void std::__new_allocator<_Tp>::deallocate(_Tp*, size_type) [with _Tp = std::__cxx11::basic_string]',
    inlined from 'static void std::allocator_traits >::deallocate(allocator_type&, pointer, size_type) [with _Tp = std::__cxx11::basic_string]' at /usr/include/c++/15.1.1/bits/alloc_traits.h:649:23,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:396:19,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:392:7,
    inlined from 'std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:375:15,
    inlined from 'std::vector<_Tp, _Alloc>::~vector() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:805:7,
    inlined from 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)' at llama-vocab.cpp:2085:26:
/usr/include/c++/15.1.1/bits/new_allocator.h:172:66: warning: 'void operator delete(void*, std::size_t)' called on pointer '__result' with nonzero offset 32 [-Wfree-nonheap-object]
  172 |         _GLIBCXX_OPERATOR_DELETE(_GLIBCXX_SIZED_DEALLOC(__p, __n));
      |                                                                  ^
llama-vocab.cpp: In member function 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)':
llama-vocab.cpp:1372:6: note: declared here
 1372 | void llama_vocab::impl::load(llama_model_loader & ml, const LLM_KV & kv) {
      |      ^~~~~~~~~~~
In member function 'void std::__new_allocator<_Tp>::deallocate(_Tp*, size_type) [with _Tp = std::__cxx11::basic_string]',
    inlined from 'static void std::allocator_traits >::deallocate(allocator_type&, pointer, size_type) [with _Tp = std::__cxx11::basic_string]' at /usr/include/c++/15.1.1/bits/alloc_traits.h:649:23,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:396:19,
    inlined from 'void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(pointer, std::size_t) [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:392:7,
    inlined from 'std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:375:15,
    inlined from 'std::vector<_Tp, _Alloc>::~vector() [with _Tp = std::__cxx11::basic_string; _Alloc = std::allocator >]' at /usr/include/c++/15.1.1/bits/stl_vector.h:805:7,
    inlined from 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)' at llama-vocab.cpp:2087:33:
/usr/include/c++/15.1.1/bits/new_allocator.h:172:66: warning: 'void operator delete(void*, std::size_t)' called on pointer '__result' with nonzero offset 32 [-Wfree-nonheap-object]
  172 |         _GLIBCXX_OPERATOR_DELETE(_GLIBCXX_SIZED_DEALLOC(__p, __n));
      |                                                                  ^
llama-vocab.cpp: In member function 'void llama_vocab::impl::load(llama_model_loader&, const LLM_KV&)':
llama-vocab.cpp:1372:6: note: declared here
 1372 | void llama_vocab::impl::load(llama_model_loader & ml, const LLM_KV & kv) {
      |      ^~~~~~~~~~~
# github.com/ollama/ollama/llama/llama.cpp/src
unicode.cpp: In function 'std::wstring unicode_wstring_from_utf8(const std::string&)':
unicode.cpp:229:10: warning: 'template class std::__cxx11::wstring_convert' is deprecated [-Wdeprecated-declarations]
  229 |     std::wstring_convert> conv;
      |          ^~~~~~~~~~~~~~~
In file included from /usr/include/c++/15.1.1/locale:47,
                 from unicode.cpp:18:
/usr/include/c++/15.1.1/bits/locale_conv.h:262:33: note: declared here
  262 |   class _GLIBCXX17_DEPRECATED wstring_convert
      |                               ^~~~~~~~~~~~~~~
?   github.com/ollama/ollama [no test files]
ok  github.com/ollama/ollama/api 0.008s
?   github.com/ollama/ollama/api/examples/chat [no test files]
?   github.com/ollama/ollama/api/examples/generate [no test files]
?   github.com/ollama/ollama/api/examples/generate-streaming [no test files]
?   github.com/ollama/ollama/api/examples/multimodal [no test files]
?   github.com/ollama/ollama/api/examples/pull-progress [no test files]
?   github.com/ollama/ollama/app [no test files]
?   github.com/ollama/ollama/app/assets [no test files]
ok  github.com/ollama/ollama/app/lifecycle 0.012s
?   github.com/ollama/ollama/app/store [no test files]
?   github.com/ollama/ollama/app/tray [no test files]
?   github.com/ollama/ollama/app/tray/commontray [no test files]
?   github.com/ollama/ollama/auth [no test files]
ok  github.com/ollama/ollama/cmd 0.032s
?   github.com/ollama/ollama/cmd/runner [no test files]
ok  github.com/ollama/ollama/convert 0.019s
?   github.com/ollama/ollama/convert/sentencepiece [no test files]
ok  github.com/ollama/ollama/discover 0.010s
ok  github.com/ollama/ollama/envconfig 0.008s
ok  github.com/ollama/ollama/format 0.003s
?   github.com/ollama/ollama/fs [no test files]
ok  github.com/ollama/ollama/fs/ggml 0.004s
ok  github.com/ollama/ollama/fs/gguf 0.004s
ok  github.com/ollama/ollama/fs/util/bufioutil 0.002s
ok  github.com/ollama/ollama/kvcache 0.002s
ok  github.com/ollama/ollama/llama 0.005s
?   github.com/ollama/ollama/llama/llama.cpp/common [no test files]
?   github.com/ollama/ollama/llama/llama.cpp/src [no test files]
?   github.com/ollama/ollama/llama/llama.cpp/tools/mtmd [no test files]
ok  github.com/ollama/ollama/llm 0.008s
?   github.com/ollama/ollama/logutil [no test files]
?   github.com/ollama/ollama/ml [no test files]
?   github.com/ollama/ollama/ml/backend [no test files]
?   github.com/ollama/ollama/ml/backend/ggml [no test files]
?   github.com/ollama/ollama/ml/backend/ggml/ggml/src [no test files]
?   github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu [no test files]
?   github.com/ollama/ollama/ml/backend/ggml/ggml/src/ggml-cpu/llamafile [no test files]
?   github.com/ollama/ollama/ml/nn [no test files]
?   github.com/ollama/ollama/ml/nn/fast [no test files]
?   github.com/ollama/ollama/ml/nn/rope [no test files]
ok  github.com/ollama/ollama/model 0.335s
ok  github.com/ollama/ollama/model/imageproc 0.025s
?   github.com/ollama/ollama/model/input [no test files]
?   github.com/ollama/ollama/model/models [no test files]
?   github.com/ollama/ollama/model/models/gemma2 [no test files]
?   github.com/ollama/ollama/model/models/gemma3 [no test files]
?   github.com/ollama/ollama/model/models/gemma3n [no test files]
?   github.com/ollama/ollama/model/models/llama [no test files]
ok  github.com/ollama/ollama/model/models/llama4 0.026s
?   github.com/ollama/ollama/model/models/mistral3 [no test files]
ok  github.com/ollama/ollama/model/models/mllama 0.692s
?   github.com/ollama/ollama/model/models/qwen2 [no test files]
?   github.com/ollama/ollama/model/models/qwen25vl [no test files]
?   github.com/ollama/ollama/model/models/qwen3 [no test files]
ok  github.com/ollama/ollama/openai 0.012s
ok  github.com/ollama/ollama/parser 0.014s
?   github.com/ollama/ollama/progress [no test files]
?   github.com/ollama/ollama/readline [no test files]
?   github.com/ollama/ollama/runner [no test files]
ok  github.com/ollama/ollama/runner/common 0.006s
ok  github.com/ollama/ollama/runner/llamarunner 0.006s
ok  github.com/ollama/ollama/runner/ollamarunner 0.016s
ok  github.com/ollama/ollama/sample 0.265s
ok  github.com/ollama/ollama/server 0.410s
ok  github.com/ollama/ollama/server/internal/cache/blob 0.007s
ok  github.com/ollama/ollama/server/internal/client/ollama 0.174s
?   github.com/ollama/ollama/server/internal/internal/backoff [no test files]
ok  github.com/ollama/ollama/server/internal/internal/names 0.003s
ok  github.com/ollama/ollama/server/internal/internal/stringsx 0.007s
?   github.com/ollama/ollama/server/internal/internal/syncs [no test files]
?   github.com/ollama/ollama/server/internal/manifest [no test files]
ok  github.com/ollama/ollama/server/internal/registry 0.014s
?   github.com/ollama/ollama/server/internal/testutil [no test files]
ok  github.com/ollama/ollama/template 1.511s
ok  github.com/ollama/ollama/thinking 0.002s
ok  github.com/ollama/ollama/tools 0.006s
?   github.com/ollama/ollama/types/errtypes [no test files]
ok  github.com/ollama/ollama/types/model 0.003s
?   github.com/ollama/ollama/types/syncmap [no test files]
?   github.com/ollama/ollama/version [no test files]
>>> ollama: Entering fakeroot...
>>> ollama-doc*: Running split function doc...
'usr/share/doc' -> '/home/buildozer/aports/community/ollama/pkg/ollama-doc/usr/share/doc'
'usr/share/licenses' -> '/home/buildozer/aports/community/ollama/pkg/ollama-doc/usr/share/licenses'
>>> ollama-doc*: Preparing subpackage ollama-doc...
>>> ollama-doc*: Running postcheck for ollama-doc
>>> ollama*: Running postcheck for ollama
>>> ollama*: Preparing package ollama...
>>> ollama*: Stripping binaries
>>> ollama-doc*: Scanning shared objects
>>> ollama*: Scanning shared objects
>>> ollama-doc*: Tracing dependencies...
>>> ollama-doc*: Package size: 376.8 KB
>>> ollama-doc*: Compressing data...
>>> ollama-doc*: Create checksum...
>>> ollama-doc*: Create ollama-doc-0.10.1-r0.apk
>>> ollama*: Tracing dependencies...
	so:libc.musl-s390x.so.1
	so:libgcc_s.so.1
	so:libstdc++.so.6
>>> ollama*: Package size: 38.2 MB
>>> ollama*: Compressing data...
>>> ollama*: Create checksum...
>>> ollama*: Create ollama-0.10.1-r0.apk
>>> ollama: Build complete at Fri, 01 Aug 2025 11:27:13 +0000 elapsed time 0h 3m 53s
>>> ollama: Cleaning up srcdir
>>> ollama: Cleaning up pkgdir
>>> ollama: Uninstalling dependencies...
( 1/10) Purging .makedepends-ollama (20250801.112320)
( 2/10) Purging go (1.24.5-r1)
( 3/10) Purging cmake (4.0.3-r0)
( 4/10) Purging patchelf (0.18.0-r3)
( 5/10) Purging libarchive (3.8.1-r0)
( 6/10) Purging libbz2 (1.0.8-r6)
( 7/10) Purging libuv (1.51.0-r0)
( 8/10) Purging rhash-libs (1.4.6-r0)
( 9/10) Purging samurai (1.2-r7)
(10/10) Purging xz-libs (5.8.1-r0)
busybox-1.37.0-r21.trigger: Executing script...
OK: 387 MiB in 108 packages
>>> ollama: Updating the community/s390x repository index...
>>> ollama: Signing the index...