After a few days back and still thinking about all the greatness I witnessed around AI, I would like to reflect on my experiences at the Microsoft Build 2024 event. I joined as an AI Expert at the Microsoft Expert Booths, where I talked with attendees, other MVPs, Experts, and Microsoft folks about the great AI community, Azure AI Studio, LLMOps, and multimodal models, and of course joined the keynotes and some AI sessions.

How I got to Build 2024

It started some weeks before the event, when Microsoft approached MVPs to take part in the Microsoft Expert Booth.
While I’m a Mixed Reality MVP, over the last year I have been heavily involved in AI and generative AI within the Microsoft landscape. We even built our Avanade GenReal accelerator platform, which brings Digital Twins, Fabric, Dynamics 365 Field Service, and AI together into an immersive experience for different roles in any organization. Enough said about that; I will dedicate a separate post to Avanade GenReal soon, particularly on how to transform prompt engineering (which we currently do in code using Azure Functions) into prompt flow using Microsoft Azure AI Studio.
I was extremely happy and grateful, even as a Mixed Reality MVP, to be selected as one of the AI Experts. After arranging travel with Avanade, I flew out on Monday morning to Microsoft Build 2024 for a week of experiences: meeting others, talking about AI, and discussing what it means for organizations to apply it in their current production processes.


We got a nice surprise during the MVP Connect on Monday. The Microsoft team had arranged for all MVPs visiting Microsoft Build 2024 in person to be seated right in front of the stage at the keynotes on day one and day two. It was a privilege to be in that spot. Sitting in the 8th row in front of the stage and seeing Satya Nadella live for the first time is just amazing! And I think it was a good thing too: the MVPs probably produced more pictures, videos, posts, and social media messages than all the media (who were seated to the left and right of us) combined. This is something the Microsoft team should keep doing. A big thank you to the team that arranged this amazing experience!


The number of announcements around AI, and in particular Copilot, at Microsoft Build 2024 was almost insane. It is incredible to see how Microsoft has moved forward with AI in less than a year to become a solid market leader in the AI revolution happening globally. They are a frontrunner at the technology level, in the available infrastructure and security, in introducing Copilot into almost any device, application, and platform, and in safe use through their responsible AI framework. It can be said that AI with Microsoft is at a stage where it makes sense to introduce it into organizations.

we are excited to announce…

Not only within the modern workplace, but also within production processes. And this is something I’m always very careful to say when I talk with clients: AI with Microsoft is starting to reach a mature level.
While we heard “we are excited to announce…” many times, there were a few announcements that stood out to me. You can find each of them below with a short explanation.

Multimodal models

These models allow different types of input and produce, depending on the request, different types of output. The types of input and output range from speech and text to media like images and video. We have the large language model GPT-4o and the small language model Phi-3-Vision. They already work incredibly well. As an example, we used an image of a chart, and with ease we were able to retrieve all the chart values from the image in JSON format. I was even able to ask GPT-4o for the name of the application that was used to generate the chart; it reasoned its way to the correct application name based on characteristics like font, colors, and other visual details. The Phi-3-Vision model has fewer capabilities due to its smaller model size, but still provided a lot of detail when asked questions. These small language models perform well and are a good alternative for AI solutions that cannot rely on connectivity and need to run locally. Still, these SLMs require a well-performing GPU on your device.
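To make the chart example more concrete, here is a minimal sketch of how such a request could be shaped for a multimodal chat-completions endpoint. The system prompt, image handling, and deployment details are my own assumptions for illustration, not the exact setup used at the booth:

```python
import base64
import json  # would be used to parse the model's JSON reply


def build_chart_extraction_messages(image_bytes: bytes) -> list:
    """Build a chat-completions message list asking a multimodal model
    (e.g. GPT-4o) to read the values out of a chart image."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {"role": "system",
         "content": "You extract data from charts. Respond with JSON only, "
                    "mapping each series label to its list of values."},
        {"role": "user",
         "content": [
             {"type": "text", "text": "Extract all values from this chart."},
             {"type": "image_url",
              "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
         ]},
    ]


# The actual call would go through an Azure OpenAI deployment and needs an
# endpoint and key, so it is only sketched here (names are placeholders):
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                        api_key="...", api_version="2024-02-01")
#   response = client.chat.completions.create(
#       model="gpt-4o",  # your deployment name
#       messages=build_chart_extraction_messages(open("chart.png", "rb").read()),
#       response_format={"type": "json_object"},
#   )
#   chart_values = json.loads(response.choices[0].message.content)
```

The same message shape works for the follow-up question about which application produced the chart; only the text part changes.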

Minecraft and LLM

There was an incredibly cool demo during the keynote where an LLM was used to support a Minecraft player in crafting a weapon in the game. The player communicated with the LLM through speech. The LLM helped determine the correct and missing materials to craft the weapon, and guided the player in collecting them. While playing the game they encountered a zombie and managed to find a place to hide. To me, this demo provided a glimpse of the near future of gaming: using LLMs to get guidance, solve a tricky problem, find the shortest route to the next step in the game, and much more.

Windows Volumetric Apps

Microsoft is working together with Meta to bring Windows Volumetric Apps to Meta Quest. This allows you to extend your Windows desktop applications into 3D space. Imagine a Windows application that shows the inventory of products you sell or use in your production process. Frontline workers can bring these into a 3D immersive space on a Meta Quest headset, much like Microsoft HoloLens experiences, with synced manipulation between the Windows application on the desktop screen and the experience inside the headset. There are not many details yet, but you can sign up for the developer program as of today.

Azure AI Studio

Your one-stop shop for building AI projects. This tool brings all the AI elements together into a platform that allows you to create projects, connect them to an Azure subscription, define connections to services, and much more. Prompt engineering in particular becomes fun with prompt flow. It allows you to compose prompts into a series of workflow steps. Each step can connect to a service, run a piece of Python code, and much more. Creating more complex prompts that require several services suddenly becomes easier and more visual. Prompt flows can be run in the platform to see if they work well, they can be debugged, and at some point, using LLMOps, they can be rolled out to different environments like development, test, and production through DevOps pipelines. It is even possible to evaluate your flow in that same DevOps pipeline using evaluation flows that test for things like groundedness, relevance, coherence, and fluency. I will be creating some additional blog posts soon to explain this in more detail.
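The idea of a flow as a chain of steps can be illustrated in plain Python. This is only a conceptual sketch, not the Prompt Flow SDK: in the real tool each step would be a node in a flow definition, and the retrieval step would call an actual search or vector-store service rather than a hard-coded lookup:

```python
from typing import Callable

# Conceptual sketch of a prompt flow: each step is a plain function and
# the flow chains them together. All step contents here are made up.


def retrieve_context(question: str) -> str:
    # Stand-in for a retrieval step (search index, vector store, ...).
    knowledge = {"build": "Microsoft Build 2024 took place in May 2024."}
    return knowledge["build"]


def build_prompt(question: str, context: str) -> str:
    # Stand-in for a prompt-templating step.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


def run_flow(question: str, llm: Callable[[str], str]) -> str:
    # The flow itself: retrieve -> build prompt -> call the model.
    context = retrieve_context(question)
    prompt = build_prompt(question, context)
    return llm(prompt)


# With a stub "model" the whole flow can be exercised locally, which is
# roughly what running and debugging a flow in the platform gives you:
answer = run_flow("When was Build?", llm=lambda p: p.splitlines()[1])
```

An evaluation flow would wrap `run_flow` and score each answer (for groundedness, relevance, and so on) against the retrieved context instead of just returning it.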

AI Agents

AI Agents are used to delegate authority over certain tasks to Copilots. These Copilots act like agents that autonomously perform their tasks. They will be able to run, orchestrate, and integrate into business processes. An example, and something extremely timely within the organization I work for, is setting up a meeting with several colleagues. What if everyone in the organization had an agent that could arrange meetings in their agenda on their behalf? These agents could work out together the best time for the meeting without any involvement of the people themselves. That would definitely be a timesaver.
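The core of that scheduling scenario is finding a slot that is free for everyone. Here is a hypothetical sketch of that step; real agents would negotiate over actual calendars (for example via Microsoft Graph) rather than over the hard-coded busy blocks used below:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: each colleague's "agent" exposes busy blocks, and a
# coordinating step finds the earliest slot that fits everyone.


def free_slots(busy, day_start, day_end, duration):
    """Yield (start, end) gaps of at least `duration` between busy blocks."""
    cursor = day_start
    for b_start, b_end in sorted(busy):
        if b_start - cursor >= duration:
            yield (cursor, b_start)
        cursor = max(cursor, b_end)
    if day_end - cursor >= duration:
        yield (cursor, day_end)


def first_common_slot(calendars, day_start, day_end, duration):
    """Return the earliest start time where `duration` fits for everyone."""
    candidate = day_start
    while candidate + duration <= day_end:
        if all(any(s <= candidate and candidate + duration <= e
                   for s, e in free_slots(busy, day_start, day_end, duration))
               for busy in calendars):
            return candidate
        candidate += timedelta(minutes=30)
    return None


day = datetime(2024, 5, 27)
start, end = day.replace(hour=9), day.replace(hour=17)
calendars = [
    [(day.replace(hour=9), day.replace(hour=11))],   # colleague A is busy 9-11
    [(day.replace(hour=10), day.replace(hour=12))],  # colleague B is busy 10-12
]
slot = first_common_slot(calendars, start, end, timedelta(hours=1))
# slot -> 2024-05-27 12:00
```

The interesting part agents add on top of this is the negotiation itself: each agent keeps its owner's calendar private and only proposes or accepts candidate slots.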

AI Expert at the Microsoft Expert Booth

As mentioned, I was an AI Expert at the Microsoft Expert Booth for the AI community, Azure AI Studio, LLMOps, and multimodal models. I spoke with so many people, listened to their interesting stories, and together we tried to solve simple but also complex scenarios with all the new tools, products, and services Microsoft is bringing to the AI table. A few things stood out.

  • There are many roads leading to Rome; you can achieve the same result in different ways. Always keep that in mind. It depends mostly on the architecture, how prompts are built, and where the data is located. An example: we had someone who wanted to expose data from a SQL database by letting users ask questions in natural language that are translated into SQL queries. A recommended way would be to define a system prompt in which you specify the database schema, some other information, and the format in which results need to be returned. You could also think about building a prompt that works like an interpreter/translator. I also spoke with someone who managed to do this using M365 Copilot and an elastic database in the back.
  • Many asked about the difference between Azure Machine Learning Studio and Azure AI Studio. And don’t forget we even have Copilot Studio and Azure OpenAI Studio. I like to explain it by the role of the person who is building the AI solution. Azure Machine Learning Studio is more for data scientists, while Azure AI Studio is more for power users with some development experience. Copilot Studio is more of a low/no-code platform for creating Copilots, and Azure OpenAI Studio is accessed from the Azure OpenAI service in the Azure Portal and provides a more simplified interface. I will spend some more time in a separate post explaining these differences in detail and when to use which environment. Keep in mind that there is a lot of overlap, and you can easily jump from one studio to another.
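The natural-language-to-SQL approach from the first bullet can be sketched as a system prompt that embeds the schema. The schema, table, and prompt wording below are made up for illustration; any generated SQL should still be validated (read-only account, allow-listed tables) before it touches the database:

```python
# Hypothetical NL-to-SQL setup: put the database schema in the system
# prompt and constrain the model to return a single SELECT statement.

SCHEMA = """
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_name NVARCHAR(100),
    order_date DATE,
    total_amount DECIMAL(10, 2)
);
"""


def build_nl_to_sql_messages(question: str) -> list:
    system_prompt = (
        "You translate user questions into T-SQL queries.\n"
        "Only use this schema:\n"
        f"{SCHEMA}\n"
        "Return exactly one SELECT statement and nothing else. "
        "Never generate INSERT, UPDATE, DELETE, or DDL."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]


messages = build_nl_to_sql_messages("What was our total revenue in May 2024?")
# `messages` can then be sent to any chat-completions endpoint; the reply
# is the SQL query to run (after validation) against the database.
```

The "interpreter/translator" variant mentioned above is the same pattern with a different system prompt framing.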

My expectations for the future

My expectation is that there will be an exponential increase in large and small language models. Releases of new versions of existing models and introductions of new models will come quicker and quicker. My hope for the future is investment in AI on the edge. This is not the same as small language models: it is the ability to run AI directly on sensors, devices, appliances, and such. That would be ground-breaking for industries. And my expectation is that Azure AI Studio is going to be the one-stop shop for, among other things, building, maintaining, testing, and evaluating prompts for your custom AI solutions.

Categorized in:

AI, Microsoft Build 2024

Last Update: May 30, 2024