Meta-Llama-3-8B-Instruct.yaml
Meta-Llama-3-8B-Instruct-FP8.yaml
Meta-Llama-3-8B-Instruct-FP8-compressed-tensors.yaml
Meta-Llama-3-8B-Instruct-INT8-compressed-tensors.yaml
Qwen2-1.5B-Instruct-INT8-compressed-tensors.yaml
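Each of these files is a small YAML config describing one lm-eval-harness accuracy check: which model to load, which tasks to run, and the reference metric values to compare against. The exact schema depends on the harness wrapper in use; the sketch below is an assumption based on the filenames, and the task name, metric key, limit, and expected value shown are placeholders, not measured numbers.

```yaml
# Hypothetical example in the style of Meta-Llama-3-8B-Instruct.yaml
# (field names and values are illustrative, not taken from the repo).
model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
tasks:
- name: "gsm8k"            # lm-eval-harness task to evaluate
  metrics:
  - name: "exact_match"    # metric reported by the task
    value: 0.75            # placeholder reference score to assert against
limit: 250                 # evaluate on a subset for CI speed
num_fewshot: 5             # few-shot examples per prompt
```

The FP8 and INT8 variants would point at quantized checkpoints (e.g. compressed-tensors format) and carry their own reference scores, so a quantization regression shows up as a metric drift against the stored value.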