Open
Labels: bug
Description
Jinja fails on gpt-oss-120b when using Vulkan.
Name and Version
version: 6097 (9515c61)
built with clang version 19.1.5 for x86_64-pc-windows-msvc
Operating systems
Windows 11
GGML backends
Vulkan
Hardware
AMD RyzenAI MAX+ 395 w/ Radeon 8060S (Strix Halo)
Models
gpt-oss-120b-Q4_K_M
https://huggingface.co/unsloth/gpt-oss-120b-GGUF
Problem description & steps to reproduce
To reproduce the issue, simply download unsloth/gpt-oss-120b-GGUF (I'm using the Q4_K_M variant) and run:
llama-server -m "C:\<MODEL_PATH>\gpt-oss-120b-Q4_K_M-00001-of-00002.gguf" -ngl 99 --jinja
This is the output that I see:
GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG...
I can confirm this happens deterministically. I can also confirm the problem does not happen when the --jinja flag is removed.
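For anyone scripting a check while bisecting this, the failure mode is easy to detect programmatically: the response is dominated by a single repeated character ("G") rather than normal text. A minimal sketch (the helper name and threshold are my own, not part of llama.cpp):

```python
def looks_degenerate(text: str, threshold: float = 0.9) -> bool:
    """Return True if one character makes up more than `threshold` of the text.

    Hypothetical helper: a run like "GGGG..." from the server output trips
    this check, while ordinary generated text does not.
    """
    if not text:
        return False
    most_common = max(set(text), key=text.count)
    return text.count(most_common) / len(text) > threshold

print(looks_degenerate("G" * 200))        # degenerate run -> True
print(looks_degenerate("Hello, world!"))  # normal text -> False
```

This makes it straightforward to compare runs with and without --jinja automatically.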
jeremyfowers