Performance improvement metrics of Meta LLM Compiler by September 30, 2024
Less than 80% • 33%
80% to 90% • 33%
More than 90% • 33%
Published performance benchmarks and studies
Meta Unveils LLM Compiler, Surpassing GPT-4 with 7B and 13B Parameters
Jun 27, 2024, 05:41 PM
Meta has announced the Meta LLM Compiler, a family of models built on Meta Code Llama with additional code-optimization and compiler capabilities. The models can emulate the compiler, predict optimization passes that minimize code size, and disassemble code. Notably, the Meta LLM Compiler outperforms GPT-4 on code-size improvement and disassembly, achieving 77% of the optimizing potential of an autotuning search and a 45% disassembly round trip. The models work with x86 assembly and LLVM-IR, come in 7B and 13B parameter sizes, and can be fine-tuned for new tasks. This release marks a significant advancement in AI-driven code optimization.
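The announcement names two concrete tasks, predicting size-minimizing optimization passes for LLVM-IR and disassembly, and reports results relative to an autotuning search. As a rough illustration, here is a minimal Python sketch of how a pass-prediction query might be framed and how the "77% of autotuning potential" metric is computed; the prompt template, the helper names, and the example numbers are illustrative assumptions, not Meta LLM Compiler's actual interface.

```python
# Illustrative sketch only: the prompt wording and helper functions below
# are assumptions for exposition, not the model's real input/output format.

def build_pass_prediction_prompt(llvm_ir: str) -> str:
    """Frame an LLVM-IR module as a code-size pass-prediction query."""
    return (
        "Predict the LLVM optimization pass sequence that minimizes "
        "code size for this module:\n\n" + llvm_ir
    )

def relative_autotuning_gain(model_reduction: float,
                             autotune_reduction: float) -> float:
    """Fraction of the autotuning search's size reduction that the
    model achieves -- the kind of ratio behind the 77% figure."""
    return model_reduction / autotune_reduction

example_ir = """\
define i32 @square(i32 %x) {
entry:
  %mul = mul nsw i32 %x, %x
  ret i32 %mul
}
"""

prompt = build_pass_prediction_prompt(example_ir)

# E.g. if autotuning shrinks a binary by 10% and the model's predicted
# pass list shrinks it by 7.7%, the model reaches 77% of that potential.
print(round(relative_autotuning_gain(0.077, 0.10), 2))  # 0.77
```

The ratio is measured against a full autotuning search because exhaustive pass-ordering search is the practical upper bound for code-size reduction that a single model prediction is trying to approximate.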