--------------------------------------------------------------------------------
Fedora Update Notification
FEDORA-2024-89c69bb9d3
2024-11-14 03:00:19.249660
--------------------------------------------------------------------------------

Name        : llama-cpp
Product     : Fedora 41
Version     : b3561
Release     : 1.fc41
URL         : https://github.com/ggerganov/llama.cpp
Summary     : Port of Facebook's LLaMA model in C/C++
Description :
The main goal of llama.cpp is to run the LLaMA model using 4-bit
integer quantization on a MacBook.

* Plain C/C++ implementation without dependencies
* Apple silicon first-class citizen - optimized via ARM NEON, Accelerate
  and Metal frameworks
* AVX, AVX2 and AVX512 support for x86 architectures
* Mixed F16 / F32 precision
* 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
* CUDA, Metal and OpenCL GPU backend support

The original implementation of llama.cpp was hacked in an evening.
Since then, the project has improved significantly thanks to many
contributions. This project is mainly for educational purposes and
serves as the main playground for developing new features for the
ggml library.

--------------------------------------------------------------------------------
Update Information:

Update to b3561
--------------------------------------------------------------------------------
ChangeLog:

* Tue Nov  5 2024 Tom Rix  - b3561-1
- Update to b3561
* Thu Jul 18 2024 Fedora Release Engineering  - b3184-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_41_Mass_Rebuild
* Sat Jun 22 2024 Mohammadreza Hendiani  - b3184-3
- added changelog
* Sat Jun 22 2024 Mohammadreza Hendiani  - b3184-2
- added .pc file
* Sat Jun 22 2024 Mohammadreza Hendiani  - b3184-1
- upgraded to b3184 which is used by llama-cpp-python v0.2.79
* Tue May 21 2024 Mohammadreza Hendiani  - b2879-7
- removed old file names .gitignore
* Sun May 19 2024 Tom Rix  - b2879-6
- Remove old sources
* Sun May 19 2024 Tom Rix  - b2879-5
- Include missing sources
--------------------------------------------------------------------------------
References:

  [ 1 ] Bug #2304782 - CVE-2024-42477 llama-cpp: global-buffer-overflow in ggml_type_size [fedora-all]
        https://bugzilla.redhat.com/show_bug.cgi?id=2304782
--------------------------------------------------------------------------------

This update can be installed with the "dnf" update program. Use
su -c 'dnf upgrade --advisory FEDORA-2024-89c69bb9d3' at the command
line. For more information, refer to the dnf documentation available at
http://dnf.readthedocs.io/en/latest/command_ref.html#upgrade-command-label
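As a sketch of a typical workflow, the advisory can be previewed before it is applied, and the installed build confirmed afterward. These are standard dnf and rpm invocations; they require root privileges (via su or sudo) on a Fedora 41 system:

```shell
# Preview the packages and bug/CVE references this advisory covers
dnf updateinfo info FEDORA-2024-89c69bb9d3

# Apply only this advisory (rather than all pending updates)
sudo dnf upgrade --advisory FEDORA-2024-89c69bb9d3

# Confirm the patched build is installed; the expected
# version-release is b3561-1.fc41
rpm -q llama-cpp
```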

All packages are signed with the Fedora Project GPG key. More details on the
GPG keys used by the Fedora Project can be found at
https://fedoraproject.org/keys
--------------------------------------------------------------------------------

-- 
_______________________________________________
package-announce mailing list -- package-announce@lists.fedoraproject.org
To unsubscribe send an email to package-announce-leave@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue
