Last week at 4YFN, I had the chance to sit down with Ricardo López Barquilla, VP Partner Engineering at Meta Reality Labs and AI. The conversation was part of a conference series on unlocking the power of AI with open innovation—you can see the agenda and full list of participants here 🚀
Ricardo has had an impressive career across top tech companies. Originally from Spain, he explained that he moved to the U.S. to learn English and open up new opportunities—he joked that the second goal was the bigger success (though he still carries his Spanish accent 😉). Before joining Meta, he spent more than 20 years at Microsoft, where his last role was VP of Devices, leading hardware and software for Microsoft Surface and Xbox. Now, at Meta, Ricardo operates at the intersection of VR, AR, and AI. 🎮🕶️🤖
His team helps anyone who wants to work with Meta technologies make the most of them—whether in the family of apps, with Quest devices, Ray-Ban Meta, PyTorch, or Llama. I loved how he described his team as “the people that carry the brushes to Michelangelo,” ensuring creators and builders have the tools they need. 🎨🛠️
Meta’s Bet on VR/AR & AI
In 2024, Meta’s VR/AR and mixed reality segment generated $2.1 billion in revenue, growing 40% YoY, with devices like the Meta Quest 3S becoming the best-selling console on Amazon US. 🏆📈
Ricardo joked about the common belief that VR glasses make you dizzy, encouraging the audience to try the latest devices. He shared a cool example of him playing ping-pong with his dad in Madrid while being in Seattle—a glimpse into the power of immersive technology. 🏓🌍
When asked about the most exciting recent launch, he had no doubt: Ray-Ban Meta smart glasses—a product that brings AI and AR into everyday life. 🕶️✨
Ricardo emphasized their focus on creating a beautiful, stylish, and practical product. Unlike VR, which immerses users in digital environments, the Ray-Bans are built for real-world use—letting you take WhatsApp calls or capture Instagram moments hands-free, whether your daughter is running to hug you or you’re in the middle of a workout, without having to hold a smartphone. He also highlighted the Ray-Ban Meta glasses’ AI-driven real-time translation, and how you can ask Meta AI questions about things you see around you. Truly an amazing experience! 🌎🎤📷
I have to say—I tried them, and wow! 🤯 You can check them out here.
The Future of Devices: A Post-Smartphone Era?
This brought up a bigger question: Are we moving beyond screens?
With AI-powered wearables, smart assistants, and mixed reality devices, we’re shifting toward more natural, AI-driven interfaces. Instead of tapping on screens, we’ll interact with technology seamlessly—through voice, gestures, and immersive experiences. 🎙️🖐️🧠
In the future, we may look back and feel awkward about how people once walked around, completely absorbed by tiny smartphone screens. 📱👀
I’d love to hear your thoughts about this topic!
AI Strategy: Meta AI & Open Source
Speaking of AI-driven interfaces, we also discussed Meta AI, Llama Models, and why Meta is betting on open-source AI. 🤖🌍
Ricardo explained that Meta AI is already available in WhatsApp: just mention @Meta AI in group chats to get recipe ideas, research fun trips, or find things to do. You can also ask Meta AI to edit photos and answer questions. While it’s not yet available in Europe, it already has 750M+ monthly active users. More info here: Meta AI. 🚀
Regarding Llama, he shared how it started in 2023 as an open-source model for researchers. The objective was to empower researchers who had access to only limited data, giving them models already trained on billions of tokens so they could focus on the problem they wanted to solve, not on the tools. After the release, Meta received a lot of interest in making it available for commercial use, so they launched Llama 2. And with all the feedback they gathered from developers—“the context window needs to be bigger, we need multilingual, we need multimodal”—they iterated on the model, which led to Llama 3.1, Llama 3.2, and Llama 3.3, with Llama 4 to be released very soon. 🦙⚡
Ricardo highlighted the outstanding performance of Llama 3.1, and noted how the Llama 3.3 70B model now offers capabilities similar to the Llama 3.1 405B model—delivering comparable quality with far greater efficiency, at a fraction of the cost. 💡💰
For more insights on Llama 3.1’s performance versus other leading models such as GPT-4o (OpenAI), Gemini Pro 1.5 (Google), or Claude 3.5 Sonnet (Anthropic), check out the comparison Meta published here. 📊🔍 Things move so quickly that I’m looking forward to seeing how Llama 4 compares to Gemini 2.0, Claude 3.7 Sonnet, and others.
In any case, Llama’s main differentiation point is Meta’s commitment to open-source AI. Ricardo highlighted that open-source AI is the safest, most adaptable, and most cost-efficient approach to AI development. This openness creates significant opportunities for developers and researchers while empowering entrepreneurs to build products without being confined to closed ecosystems.
We also discussed the importance of partnerships and open innovation. Ricardo explained that his team’s focus is to build a partner ecosystem with different organizations so that AI models are available across devices, cloud providers, and telecom networks. 🔓📡
For more on Meta’s open-source vision, you can read Mark Zuckerberg’s letter, where he writes about why open-source AI is good for developers, for Meta, and for the world. 📜🌍
BTW—recent news revealed that Meta AI will soon launch its own app to compete with OpenAI’s ChatGPT. More details here: CNBC article. 📱💡
Advice for AI Startups
With so many founders in the room at 4YFN, I asked Ricardo:
What advice would you give to startups building AI-powered products today?
His answer was clear:
🌍 Leverage open-source AI to maintain control and adaptability.
🚀 Don’t rely on a single provider for AI models or data—stay flexible.
🛠 Choose the right model for your needs—smaller, specialized models can be more efficient for specific verticals.
⚡ Move fast, be bold and keep pushing.
Without a doubt, this was an eye-opening conversation on the future of AI and the next wave of innovation. 🔮✨ Thanks a lot Ricardo for your time!
Did you find it interesting? I’d love to hear your thoughts on the future of Devices and Open Source AI models! Feel free to drop a comment or reach out. 🚀💬
Subscribe now to read more insights and stories on how to surf the wAIve!