Fedora 40: llama-cpp 2024-b07b0b41ec Security Advisory Updates
Summary
The main goal of llama.cpp is to run the LLaMA model using 4-bit
integer quantization on a MacBook. Its main features include:
* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate
and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support
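To illustrate the block-wise integer quantization the list above refers to, here is a minimal sketch of symmetric 4-bit quantization of a block of floats. The struct and function names are hypothetical and the scheme is simplified; the real ggml formats (Q4_0, Q4_K, etc.) use different block layouts and packing.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical per-block 4-bit quantization: one float scale plus one
// 4-bit code per value. Real ggml blocks pack two codes per byte.
struct Q4Block {
    float scale;             // per-block scale factor
    std::vector<uint8_t> q;  // codes in [0, 15], i.e. 4 usable bits each
};

Q4Block quantize_q4(const std::vector<float>& x) {
    float amax = 0.0f;
    for (float v : x) amax = std::max(amax, std::fabs(v));
    float scale = amax / 7.0f;  // map [-amax, amax] onto integer range [-7, 7]
    Q4Block b{scale, {}};
    for (float v : x) {
        int qi = (scale != 0.0f) ? (int)std::lround(v / scale) : 0;
        qi = std::max(-7, std::min(7, qi));
        b.q.push_back((uint8_t)(qi + 8));  // offset by 8 so it fits in 4 bits
    }
    return b;
}

float dequantize_q4(const Q4Block& b, size_t i) {
    return ((int)b.q[i] - 8) * b.scale;
}
```

The mixed F16/F32 precision in the feature list plays the same role as the per-block float scale here: weights are stored in few bits, while scales and activations stay in floating point.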
The original implementation of llama.cpp was hacked in an evening.
Since then, the project has improved significantly thanks to many
contributions. This project is mainly for educational purposes and
serves as the main playground for developing new features for the
ggml library.
Update Information:
Update to b3561
Change Log
* Sat Oct 26 2024 Tom Rix
References
[ 1 ] Bug #2304712 - CVE-2024-42479 llama-cpp: Write-what-where in rpc_server::set_tensor [fedora-all]
https://bugzilla.redhat.com/show_bug.cgi?id=2304712
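The CVE above is a write-what-where flaw: the RPC server copied attacker-supplied bytes to a destination derived from the request without validating it. The sketch below shows the vulnerability class and the usual bounds-check fix; the names and buffer layout are hypothetical, not the actual llama.cpp code.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical server-side tensor buffer.
struct Buffer {
    std::vector<uint8_t> data;
};

// Unsafe pattern (write-what-where): the client-supplied offset and length
// are trusted, so a malicious request can write anywhere in memory.
//
//   std::memcpy(buf.data.data() + off, src, n);  // off and n unchecked
//
// Hardened pattern: refuse any write that falls outside the buffer.
bool set_tensor_checked(Buffer& buf, size_t off, const uint8_t* src, size_t n) {
    // The subtraction form avoids integer overflow in off + n.
    if (off > buf.data.size() || n > buf.data.size() - off)
        return false;  // out-of-bounds request rejected
    std::memcpy(buf.data.data() + off, src, n);
    return true;
}
```

Checking `n > size - off` rather than `off + n > size` matters because `off + n` can wrap around for large attacker-chosen values and slip past a naive comparison.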
Update Instructions
This update can be installed with the "dnf" update program. At the command line, use:

  su -c 'dnf upgrade --advisory FEDORA-2024-b07b0b41ec'

For more information, refer to the dnf documentation available at http://dnf.readthedocs.io/en/latest/command_ref.html#upgrade-command-label