
Why CloudThrill Believes the Future of AI Lies in Open Source and Flexibility

By Petar Vojinovic
Published on: August 25, 2025

CloudThrill was founded to challenge the status quo of cloud and AI adoption. Frustrated by how vendors often lock organizations into rigid platforms, Founder Kosseila Haddalene set out to build a vendor- and platform-agnostic company that puts clients first. In this interview with SafetyDetectives, Kosseila explains how CloudThrill empowers businesses to design flexible, secure, and cost-efficient AI infrastructure rooted in Open Source freedom. He also shares why infrastructure, not flashy apps, is the real backbone of enterprise AI, the risks of proprietary platforms, and what excites him most about the future of AI innovation.

What inspired CloudThrill to take a vendor- and platform-agnostic path?

CloudThrill was born out of a frustration I experienced throughout my career, watching, as an engineer, organizations get strong-armed during presales. Too often, companies are locked into a single vendor or cloud platform. Instead of technology serving their goals, they end up serving the vendor’s roadmap. That never sat right with us, so we wanted to flip that script.
At CloudThrill, we believe customers should never feel boxed in by one technology’s limitations. We tailor solutions to the client, not the other way around. That means exploring the full spectrum of possibilities, from hyperscalers to Open Source AI solutions, and crafting what actually works best for them, both today and in the future.

Our motto is simple: “challenge your potential.” For us, that’s not just about solving problems as they come up, it’s about pushing clients to think bigger, experiment smarter, and embrace flexibility so they’re always ready for their next big move. With a vendor- and platform-agnostic mindset, freedom of choice, and zero strings attached, we make that vision real.

What drove CloudThrill to bet on AI infrastructure? What do you think is underrated in the AI space?

We saw the hype train around AI apps: agent craziness, gimmicky productivity add-ons. But in the B2B space the real bottleneck is infrastructure. If you don’t get the foundations right in terms of privacy, cost efficiency, scalability, and Open-Source flexibility, then everything built on top is shaky. What’s underrated is the “boring plumbing”: orchestration, networking, GPU economics, compliance, security. Everyone wants to talk about the shiny app layer, but few want to dig into the hard stuff that actually determines whether you’re allowed to ship AI in production. That’s where we come in, helping new adopters build AI stacks with the full blessing of their SecOps auditors.

The best part for customers is that it can be done with 100% Open Source frameworks, from inference engines (vLLM/Ollama) to the Open models that run on top (Llama, Mixtral, DeepSeek, Gemma, etc.). This took months of training and partnership with labs like LMCache, but we’re proud of the progress and ready to help clients build their AI future.
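To make the “local inference” idea concrete: both vLLM and Ollama can expose an OpenAI-compatible chat endpoint, so applications talk to a self-hosted model the same way they would to a hosted API. The sketch below only builds such a request; the endpoint URL and the `llama3` model name are illustrative assumptions, not CloudThrill’s actual stack.

```python
import json

# Hypothetical local endpoint; vLLM and Ollama can both serve an
# OpenAI-compatible /v1/chat/completions route on a host you control.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Serialize an OpenAI-compatible chat request for a locally served model."""
    payload = {
        "model": model,  # e.g. a locally pulled Llama or Mixtral weight
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }
    return json.dumps(payload)

# The request body an HTTP client would POST to LOCAL_ENDPOINT:
body = build_chat_request("llama3", "Summarize our Q3 compliance report.")
print(body)
```

Because the model runs on infrastructure you own, prompts and responses never leave your network, which is the privacy property discussed above.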

Why aren’t proprietary AI platforms the solution for a private AI audience?

Because they come with too many risks. Security breaches have already exposed customer data, and even with APIs your information can pass through third-party processors (e.g., Snowflake). On top of that, you give up full control since interactions are often used to train their models, which is a non-starter for sensitive business data. In fact, even U.S. court orders have required OpenAI to retain deleted chats.

Then there’s cost and transparency. Proprietary platforms operate as black boxes with shifting pricing models tied to token usage (you pay for input, output, and even thinking tokens). That makes it hard to predict expenses or even understand how decisions are being made. It’s no surprise that highly regulated sectors, from banks to healthcare, have restricted or outright banned ChatGPT and similar platforms.

At CloudThrill we take the opposite view: companies should own their AI stack. By building on Open-Source models and local inference, organizations can maintain privacy, stay compliant, and keep full control over their data and costs. That’s the path to a truly private AI practice.

What challenges do you see for organizations adopting AI?

Three stand out:

  • Skills gap: Everyone wants AI, but few teams know how to operationalize it securely and at scale. Many organizations also lack internal expertise or even clarity on the right use cases to pursue. And don’t forget this technology is barely two years old. It jumped from research papers to production almost overnight.
  • Cost surprises: GPU bills skyrocket without optimization. What starts as a small pilot often turns into sticker shock when workloads scale, especially if no long-term budget is planned. CIOs are not PhD students in machine learning, and they often need extra guidance just to evaluate which GPU series or chip combination makes sense for their workloads. Without that, companies end up overspending fast.
  • Data governance: Handling sensitive data in AI workflows without breaking compliance or sovereignty rules is still new territory. Think of it like a SOC2 equivalent for AI. Not every department can freely share data, yet decision makers still need access to high-level insights. Do you create a fine-tuned model for each vertical? A global RAG system? Or manage access at the application level?

These are the kinds of unresolved questions organizations face today. It is up to the AI professional community to turn those challenges into opportunities by helping customers filter signal from noise, while building AI infrastructure that is cost-efficient and compliant without cutting corners.

What do you believe are the smartest moves in the AI industry right now?

I think one of the smartest moves we’re seeing in AI right now is companies focusing less on the hype around consumer-facing tools and more on building hybrid enterprise-grade AI services. For example, look at Cohere: they’re not out there chasing consumer buzz with splashy model releases or tiered subscriptions. Instead, they’re quietly closing big deals with telcos and financial institutions like Bell, the Government of Canada, and RBC.

Their model is about combining deep neural network expertise with the real, business-critical data of large enterprises, and giving themselves (and their customers) the time to iterate, make mistakes, and perfect solutions before productizing them at scale. It bears a striking resemblance to how the likes of SAP became the backbone for ERP systems: not flashy, but deeply embedded and indispensable. That’s the kind of execution that ticks all the boxes: sustainable, enterprise-first, and transformative in the long run.

What excites you most, and how do you see AI infrastructure evolving in the next 5 years?

What excites me is how fast the foundations are evolving. Open models and open infrastructure are already changing the game, but the real breakthrough is quantization. By compressing weights, it reduces VRAM pressure and makes it possible to run huge models on smaller GPUs and even CPUs. Techniques like BitNet or Unsloth can shrink models by up to 80%, letting something like DeepSeek-V3 run on just a couple of L4 GPUs, a huge leap in accessibility.
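The VRAM savings from quantization are back-of-envelope arithmetic: weight memory scales with bits per weight, so going from 16-bit to 4-bit weights cuts it by roughly 75%. The sketch below illustrates this; the 20% overhead factor is an assumption (real usage depends on context length, KV cache, and the serving engine), not a vendor figure.

```python
def vram_gb(params_billion: float, bits_per_weight: float,
            overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to hold model weights, with an assumed ~20%
    overhead for activations and KV cache. A simplification only."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A hypothetical 70B-parameter model at 16-bit vs 4-bit weights:
fp16 = vram_gb(70, 16)
int4 = vram_gb(70, 4)
print(f"fp16: {fp16:.0f} GB, int4: {int4:.0f} GB, "
      f"saving: {1 - int4 / fp16:.0%}")
```

Under these assumptions, the 16-bit model needs GPU memory in the H100/A100 multi-card class, while the 4-bit version fits on far cheaper hardware, which is exactly the accessibility shift described above.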

The other side of the coin is energy. Data center electricity consumption is projected to more than double to nearly 945 TWh by 2030, six times Argentina’s total consumption today. With power grids already under strain, as many as 20% of planned projects could be delayed. That makes efficiency the defining challenge of the next wave.

This is where I am excited to see new players entering the race. Companies like Positron, Furiosa, and d-Matrix are building AI-specific silicon designed for transformer inference with the best combination of performance, power efficiency, and total cost of ownership. If they succeed, they will not just disrupt the GPU incumbents, they will help solve the sustainability puzzle that will ultimately decide how far and fast AI can scale.
