Latest in o1 LLMs
Alibaba researchers unveil Marco-o1, an LLM with advanced reasoning capabilities
The model uses more cycles during inference to generate more tokens and review responses, improving its performance on reasoning tasks. Tech - VentureBeat - 9 hours ago
Chinese researchers unveil LLaVA-o1 to challenge OpenAI’s o1 model
LLaVA-o1 breaks down the answer into multiple reasoning components and uses inference-time scaling to optimize each stage. Tech - VentureBeat - 5 days ago
Study finds LLMs can identify their own mistakes
It turns out that LLMs encode quite a bit of knowledge about the truthfulness of their answers, even when they give the wrong one. Tech - VentureBeat - October 29
Thomson Reuters’ CoCounsel redefines legal AI with OpenAI’s o1-mini model
Thomson Reuters integrates OpenAI's o1-mini model into its CoCounsel legal assistant, pioneering a multi-AI approach with Google and Anthropic models for enhanced legal workflows. Tech - VentureBeat - 2 days ago
Why multi-agent AI tackles complexities LLMs can’t
While AGI and fully autonomous systems are still on the horizon, multi-agent systems will bridge the current gap between LLMs and AGI. Tech - VentureBeat - November 2
PuppyGraph speeds up LLMs’ access to graph data insights
While PuppyGraph is less than a year old, it is already seeing success with several enterprises, including Coinbase, Clarivate, Dawn Capital and Prevelant AI. Tech - VentureBeat - November 7
Here are 3 critical LLM compression strategies to supercharge AI performance
How techniques like model pruning, quantization and knowledge distillation can optimize LLMs for faster, cheaper inference. Tech - VentureBeat - November 9
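Of the three compression techniques the teaser names, quantization is the easiest to show in a few lines. Below is a minimal, hypothetical sketch of symmetric per-tensor int8 post-training quantization in NumPy; it is an illustration of the general idea, not code from the article or any particular framework.

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights onto the int8 range [-127, 127] with one shared scale."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# per-element reconstruction error is bounded by about scale / 2
```

The payoff is storage and bandwidth: each weight shrinks from 4 bytes to 1, at the cost of a small, bounded rounding error; production systems typically refine this with per-channel scales and calibration data.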
How Microsoft’s next-gen BitNet architecture is turbocharging LLM efficiency
A smart combination of quantization and sparsity allows BitNet LLMs to become even faster and more compute- and memory-efficient. Tech - VentureBeat - November 14
How custom evals get consistent results from LLM applications
Public benchmarks are designed to evaluate general LLM capabilities. Custom evals measure LLM performance on specific tasks. Tech - VentureBeat - November 14
DeepSeek’s first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance
The company’s published results highlight its ability to handle a wide range of tasks, from complex mathematics to logic-based scenarios. Tech - VentureBeat - November 20