News

When bringing the latest AI models to edge devices, it’s tempting to focus only on how efficiently they can perform basic calculations—specifically, “multiply-accumulate” operations, or MACs.
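For readers unfamiliar with the term, a multiply-accumulate (MAC) is simply a multiplication whose result is added to a running total; a neural-network layer is essentially millions of these. A minimal, generic illustration (not tied to any particular chip or article above):

```python
def dot_product_macs(weights, activations):
    """Compute a dot product as a sequence of MAC operations,
    the core arithmetic of neural-network inference."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one multiply-accumulate (MAC)
    return acc

print(dot_product_macs([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32
```

Counting MACs alone, however, ignores memory movement and data layout, which often dominate energy cost on edge hardware.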
Small Language Models (SLMs) bring AI inference to the edge without overwhelming resource-constrained devices. In this article, author Suruchi Shah dives into how SLMs can be used in edge ...
Additionally, BrainChip is working on quantizing the model to 4 bits, so that it will efficiently run on edge device hardware.
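BrainChip's exact quantization scheme is not described here; as a hedged sketch, symmetric 4-bit quantization in general maps each float weight to an integer in [-8, 7] plus a shared scale factor, cutting memory roughly 8x versus 32-bit floats. The example values below are illustrative only:

```python
def quantize_4bit(values):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]
    with a single per-tensor scale factor."""
    scale = max(abs(v) for v in values) / 7  # 7 is the largest positive int4
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float values from 4-bit codes."""
    return [x * scale for x in q]

weights = [0.9, -0.35, 0.05, -1.2]          # hypothetical weights
q, scale = quantize_4bit(weights)
print(q)                                     # [5, -2, 0, -7]
print(dequantize_4bit(q, scale))             # approximate reconstruction
```

Real deployments typically use per-channel scales and calibration data to limit accuracy loss.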
Edge AI may be one of the most exciting frontiers in technology today, but its trajectory is being shaped not just by chips or data models, but by the IP behind them.
Across the tech ecosystem, organizations are coalescing around a shared vision: The smarter future of AI lies at the edge.
Geniatech's M.2 AI accelerator module, powered by Kinara's Ara-2 NPU, delivers 40 TOPS INT8 compute performance at low power ...
By distilling DeepSeek-R1 into smaller versions, developers can leverage state-of-the-art AI performance on edge devices without requiring expensive hardware or cloud connectivity. Why this matters ...
Mitsubishi Electric Corporation (TOKYO: 6503) announced today that it has developed a language model tailored for manufacturing processes operating on edge d ...
Ceva, Inc., the leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense and infer data more reliably and efficiently, and Edge Impulse, the leading platform ...