OpenAI Group PBC and Mistral AI SAS today introduced new artificial intelligence models optimized for cost-sensitive use ...
Nvidia's KV Cache Transform Coding (KVTC) compresses the LLM key-value (KV) cache by 20x without model changes, cutting GPU memory costs and reducing time-to-first-token by up to 8x for multi-turn AI applications.
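The snippet above does not describe KVTC's internals, but the general idea of transform coding can be illustrated on a toy array: project each cache row onto an orthonormal basis (here a DCT-II basis), discard small coefficients, and reconstruct. This is a generic sketch, not Nvidia's algorithm; the shapes, the DCT choice, and the keep-8-of-64 pruning rule are all illustrative assumptions.

```python
import numpy as np

def dct_basis(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis matrix (n x n): rows are cosine basis vectors.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def compress_rows(x: np.ndarray, keep: int):
    # Transform each row into DCT coefficients, then zero out all but
    # the `keep` largest-magnitude coefficients per row (lossy step).
    n = x.shape[-1]
    c = dct_basis(n)
    coeffs = x @ c.T
    drop = np.argsort(np.abs(coeffs), axis=-1)[..., :-keep]
    np.put_along_axis(coeffs, drop, 0.0, axis=-1)
    return coeffs, c

def decompress_rows(coeffs: np.ndarray, c: np.ndarray) -> np.ndarray:
    # Inverse transform: the basis is orthonormal, so C.T is its inverse.
    return coeffs @ c

# Toy stand-in for a KV cache: 16 tokens x 64 dims of smooth activations
# (a cumulative sum of noise, whose energy concentrates in low frequencies).
rng = np.random.default_rng(0)
base = np.cumsum(rng.normal(size=(16, 64)), axis=-1)
coeffs, c = compress_rows(base, keep=8)   # store 8 of 64 values per row
recon = decompress_rows(coeffs, c)
rel_err = np.linalg.norm(recon - base) / np.linalg.norm(base)
print(f"kept 8/64 coefficients per row, relative error {rel_err:.3f}")
```

Because the toy data is smooth, most of its energy lands in a few low-frequency coefficients, so an 8x reduction in stored values reconstructs the rows with small error; real KV tensors and the actual KVTC transform will behave differently.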
A Stanford engineer has demonstrated that frontier language models can run directly on everyday edge devices using convex ...
Artificial intelligence (AI) is rapidly transforming healthcare. AI systems can now detect diabetic eye disease from retinal photos and analyze CT images for signs of early-stage lung cancers and ...
Google Research has proposed a training method that teaches large language models to approximate Bayesian reasoning by ...
Memory-based Small Language Models deployed across virtualized, highly distributed telecommunications networks achieve ...
As large language models (LLMs) gain momentum worldwide, there’s a growing need for reliable ways to measure their performance. Benchmarks that evaluate LLM outputs allow developers to track ...
GPT-4o achieved ICC/CCC of 0.815/0.866 versus in-person SALT scoring and 0.833/0.817 versus image-based scoring, while expert ...
The world's first Tibetan large language model and its application, DeepZang, has been officially unveiled in Lhasa, ...
Real-world AI for robots is hard and expensive to create. Or is it? Researchers at a UK university just showed us how to ...
AI systems that understand and generate text, known as language models, are the hot new thing in the enterprise. A recent survey found that 60% of tech leaders said that their budgets for AI language ...