VSORA

Jotunn8

ENABLING AI DATACENTER INFERENCE AT SCALE

COST PER QUERY < $0.002

DeepSeek, Llama, GPT... all parameter sizes supported

Our Generative AI Offer

Jotunn8

3.2 Petaflops (dense), tailor-made for easy, CUDA-free Generative AI. Large models such as Llama3-405B, GPT-4, or DeepSeek-R1 can be quickly implemented and deployed using standard software.
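As a rough illustration of the kind of "standard software", CUDA-free flow referred to above, the sketch below runs LLM inference on CPU with the open-source Hugging Face transformers library. The small placeholder model ID and the idea of retargeting such a pipeline to Jotunn8 are assumptions for illustration only, not VSORA's documented toolchain.

```python
# Minimal sketch: CUDA-free text generation with the open-source Hugging Face
# "transformers" library, running on CPU. The tiny "gpt2" checkpoint is a
# placeholder; a production deployment would load a large model such as
# Llama3-405B or DeepSeek-R1. Mapping this flow onto Jotunn8 is assumed here
# for illustration and is not shown in this example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder checkpoint, small enough to run anywhere

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU; no CUDA required

prompt = "Datacenter-scale generative AI inference"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```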

Tyr4

3.2 Petaflops (sparse) for any AI application, including advanced Generative AI. All completely CUDA-free.

Tyr2

1.6 Petaflops (sparse) for easy AI, including Generative AI. All completely CUDA-free.

Tyr1

800 Teraflops (sparse) for any AI application, including Generative AI. All completely CUDA-free.

Autonomous Driving / ADAS

CUDA-Free, High-Performance, Low-Power Computing for the Vehicle

3.2 Petaflops / 100 W

Any Algorithm
Any Host Processor
Fully Programmable

Our AD/ADAS Offer

Tyr4

3.2 Petaflops for any AD/ADAS application. All completely CUDA-free.

Tyr2

1.6 Petaflops for any AD/ADAS application. All completely CUDA-free.

Tyr1

800 Teraflops for any AD/ADAS application. All completely CUDA-free.

Zonal Architecture

Be a part of something great.

Take the first step. We will do the rest.
