The VSORA Company

AI – Demystified and Delivered


AI is transforming the world
– we're transforming AI

AI is reshaping every industry, and now we’re redefining its power. Our mission is to democratize AI, from generative and agentic AI to reasoning models, delivering maximum profitability for customers while minimizing environmental and societal impact.

Unlike conventional accelerators built for training, VSORA’s Jotunn 8 is engineered exclusively for inference. By optimizing for latency-sensitive workloads, it achieves lightning-fast response times, dramatically higher throughput, and a significantly lower cost per query.

Discover how Jotunn 8 makes AI more accessible, efficient, and sustainable than ever.

We are VSORA

VSORA is a French fabless semiconductor company delivering ultra-high-performance AI inference solutions for both data centers and edge deployments. Our proprietary architecture achieves exceptional implementation efficiency, ultra-low latency, and minimal power draw—dramatically cutting inference costs across any workload.

Fully programmable and agnostic to both algorithms and host processors, our chips serve as versatile companion platforms. A rich instruction set lets them seamlessly handle pure AI, pure DSP, or any hybrid of the two, all without burdening developers with extra complexity.

To streamline development and shorten time-to-market, VSORA embraces industry standards: our toolchain is built on LLVM and supports common frameworks like ONNX and PyTorch, minimizing integration effort and customer cost.
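
This page does not document VSORA’s own toolchain commands, so the sketch below only illustrates the framework-standard hand-off such an ONNX/PyTorch-compatible flow builds on: exporting a trained PyTorch model to ONNX, the format an ONNX-capable compiler can then ingest. The model, tensor shapes, and file names are illustrative assumptions, and the VSORA-specific compilation step is intentionally not shown.

```python
# Minimal sketch, assuming a generic PyTorch -> ONNX hand-off.
# TinyClassifier, the shapes, and the file name are illustrative only;
# the VSORA-specific compilation of the .onnx file is not shown here.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy model standing in for a real inference workload."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_input = torch.randn(1, 128)  # batch of one feature vector

# Standard PyTorch ONNX export; the resulting file is what an
# ONNX-compatible toolchain would take as its input.
torch.onnx.export(
    model,
    example_input,
    "tiny_classifier.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```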


Join VSORA!

Drive the Future of Generative AI

Our rapid growth is powered by a talented, multicultural team united by pride in our work and a commitment to helping everyone shine. We’ve traded rigid hierarchies for a relaxed, friendly atmosphere where dedication, ownership, collaboration, agility, and engagement guide everything we do. For us, work isn’t a checklist; it’s a shared mission. You’re empowered to shape a leading company and share in its success.

We are currently looking for staff in the following locations:

Explore Tyr

Unmatched AI Performance at the Edge.

Flexibility

Fully programmable

Algorithm agnostic

Host processor agnostic

RISC-V cores to offload the host & run AI completely on-chip

Memory

Capacity

HBM: 36 GB

Throughput

HBM: 1 TB/s

Performance

Tensorcore (dense)

Tyr 4
fp8: 1600 Tflops
fp16: 400 Tflops

Tyr 2
fp8: 800 Tflops
fp16: 200 Tflops

General Purpose

Tyr 4
fp8/int8: 50 Tflops
fp16/int16: 25 Tflops
fp32/int32: 12 Tflops

Tyr 2
fp8/int8: 25 Tflops
fp16/int16: 12 Tflops
fp32/int32: 6 Tflops

Close-to-theory efficiency

Explore Jotunn 8

Introducing the World’s Most Efficient AI Inference Chip.

Flexibility

Fully programmable

Algorithm agnostic

Host processor agnostic

RISC-V cores to offload the host & run AI completely on-chip

Memory

Capacity

HBM: 288 GB

Throughput

HBM: 8 TB/s

Performance

Tensorcore (dense)

fp8: 3200 Tflops
fp16: 800 Tflops

General Purpose

fp8/int8: 100 Tflops
fp16/int16: 50 Tflops
fp32/int32: 25 Tflops

Close-to-theory efficiency
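
As a back-of-envelope reading of the spec numbers above, the sketch below computes the classic roofline break-even point: the arithmetic intensity (FLOPs per byte fetched from HBM) at which a kernel stops being bandwidth-bound. It assumes the fp8 figures are dense peaks and that HBM bandwidth is the only memory limit, ignoring on-chip memory, sparsity, and real-world utilization; the function name is ours, not VSORA’s.

```python
# Back-of-envelope roofline reading of the published peak numbers.
# Assumption: fp8 figures are dense peaks and HBM is the only memory
# bottleneck considered; real kernels also depend on on-chip memory
# and achieved utilization.

def breakeven_intensity(peak_tflops: float, hbm_tb_per_s: float) -> float:
    """Arithmetic intensity (FLOPs per HBM byte) at which a kernel
    shifts from bandwidth-bound to compute-bound."""
    return (peak_tflops * 1e12) / (hbm_tb_per_s * 1e12)

for name, tflops, bandwidth in [("Jotunn 8 (fp8)", 3200, 8), ("Tyr 4 (fp8)", 1600, 1)]:
    print(f"{name}: compute-bound above ~{breakeven_intensity(tflops, bandwidth):.0f} FLOPs/byte")
```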
