Get Microsoft Certified with K21 Academy’s AI-102 Course – Hands-On Azure AI Labs Included

Microsoft Azure AI Engineer Associate Certification AI-102
Azure AI/ML


This blog post is your comprehensive guide to mastering the Microsoft Azure AI Engineer Associate Certification (AI-102). We’ll walk you through all the essential Hands-On Labs for Azure AI-102 required to design and implement AI solutions on Azure.

Whether you’re pursuing self-paced learning or preparing as a team, this blog offers a detailed summary of each lab, helping you build the practical skills needed to excel in Artificial Intelligence Engineering on Microsoft Azure.

Get ready to dive into 40+ Key Labs that will set you on the path to certification success.

Here’s a quick sneak peek at how to start learning to design and implement Azure AI solutions and clear the Microsoft Azure AI Engineer Associate Certification (AI-102) through hands-on labs.

AI-102 Learning Path, Source: K21Academy

Lab 1: Prepare for an AI development project

In this lab, you’ll lay the foundation for a successful AI development project. You’ll gain a deeper understanding of the tools and resources required for AI development, setting up environments, and selecting the most suitable AI models for your project. Additionally, you’ll explore the essential steps for defining clear project objectives, identifying the necessary data sources, and establishing realistic timelines to ensure smooth execution. This lab will provide you with the knowledge needed to start your AI project on the right track.

AI development project, Source: K21Academy


Key Takeaways from Lab:

  • Set Up in Azure AI Foundry: Sign in and create an AI hub and project to organize your development resources.
  • Integrate AI Services: Connect Azure AI resources like OpenAI to support your project.
  • Deploy GPT-4 Model: Deploy and fine-tune GPT-4 for tasks like product inquiries and customer sentiment analysis.
  • Test and Fine-Tune Model: Use the Chat Playground to test the model with real customer queries.
  • Manage Resources & Costs: Learn resource management and perform clean-up activities to minimize costs.
  • Collaborate and Customize: Customize models and work collaboratively within the project workspace.
  • Share Progress: Showcase your work on LinkedIn and in the community to build your professional profile.

By the end of the lab, you will be ready to initiate an AI development project with a clear roadmap and necessary infrastructure in place.

Lab 2: AI Foundry: Deploy & Compare Language Models

In this lab, you will deploy and compare various language models to understand their performance in different scenarios. You’ll work with popular pre-trained models, such as GPT, and explore how to fine-tune them for specific tasks. The lab will guide you through deploying models in a cloud environment, evaluating their efficiency, and comparing metrics like accuracy, speed, and resource usage.

AI Foundry: Deploy & Compare Language Models, Source: K21Academy


Key Takeaways from Lab: 

  • Explore Models: Access Azure AI Foundry and explore GPT-4o and Phi-4-mini-instruct models.
  • Compare Models: Evaluate models based on accuracy, quality, and cost.
  • Deploy Models: Deploy both models into your project and test using real-time queries.
  • Evaluate Performance: Assess model responses and benchmark metrics.
  • Cost Analysis: Compare the cost-effectiveness and resource usage of both models.
  • Clean Up Resources: Delete resources to minimize costs.
  • Share Results: Share insights on LinkedIn and in the community.

By the end of the lab, you will have hands-on experience with deploying and optimizing language models for real-world applications.
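To give a feel for the comparison step, here is a minimal Python sketch that sends the same prompt to two deployments and records latency, token usage, and a rough cost estimate. The price table is a made-up illustration (check the Azure pricing page for real figures), and `client` is assumed to be an `AzureOpenAI` client already bound to your project endpoint.

```python
import time

# Hypothetical per-1K-token prices for illustration only.
PRICE_PER_1K = {"gpt-4o": 0.005, "phi-4-mini-instruct": 0.0005}

def estimate_cost(deployment: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Rough cost estimate from token counts and a per-1K-token price table."""
    rate = PRICE_PER_1K[deployment]
    return (prompt_tokens + completion_tokens) / 1000 * rate

def compare(client, deployments, prompt):
    """Send the same prompt to each deployment; record latency, tokens, and cost."""
    results = {}
    for name in deployments:
        start = time.perf_counter()
        resp = client.chat.completions.create(
            model=name, messages=[{"role": "user", "content": prompt}]
        )
        results[name] = {
            "latency_s": time.perf_counter() - start,
            "tokens": resp.usage.total_tokens,
            "cost_usd": estimate_cost(
                name, resp.usage.prompt_tokens, resp.usage.completion_tokens
            ),
        }
    return results
```

Running `compare(client, ["gpt-4o", "phi-4-mini-instruct"], "Summarize AI-102 in one line")` would produce a side-by-side table you can use for the accuracy-versus-cost trade-off the lab explores.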

Lab 3: Create a Gen AI Chat App

In this lab, you will build a Generative AI-powered chat application. The focus will be on integrating a language model, such as GPT, into a real-time chat interface.

You will learn how to set up the backend infrastructure, handle user inputs, and generate meaningful responses using the AI model. Additionally, you will explore techniques for optimizing the chat flow and ensuring a seamless user experience.

Create a Gen AI Chat App, Source: K21Academy


Key Takeaways from Lab: 

  • Deploy a Model: Set up an Azure AI Foundry project and deploy a pre-trained GPT-4o model to handle IT queries and cloud management tasks.
  • Build a Client App: Develop a client application to interact with the deployed AI model, allowing real-time chat with the app.
  • Configure & Code: Configure the app settings, write code to connect to the Azure AI Foundry project, and enable communication between the app and model.
  • Test Real-Time Interactions: Run the chat app and test it by entering various queries, ensuring the model responds correctly to simulated IT and cloud questions.
  • Optimize for Cloud Management: The app helps the IT team answer questions about server health, cloud resources, and system configurations, improving efficiency.
  • Clean Up Resources: After completing the app, clean up the resources to avoid ongoing costs associated with the project.
  • Share Learning: Share your progress on LinkedIn and within the community to showcase your skills and connect with potential employers.

By the end of the lab, you will have developed a fully functional chat app capable of generating intelligent and context-aware conversations.
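The client-app pattern from this lab can be sketched as a small loop using the `openai` package's `AzureOpenAI` client. The environment variable names, deployment name, and system prompt below are assumptions for illustration; adapt them to your own Foundry project.

```python
import os

def build_messages(history, user_input, system_prompt="You are an IT support assistant."):
    """Assemble the message list: system prompt, prior turns, then the new user message."""
    return (
        [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_input}]
    )

def chat_loop(deployment="gpt-4o"):
    """Interactive console chat against an Azure OpenAI deployment."""
    # Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set.
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )
    history = []
    while (user_input := input("You: ")) != "quit":
        reply = client.chat.completions.create(
            model=deployment, messages=build_messages(history, user_input)
        )
        answer = reply.choices[0].message.content
        print("Assistant:", answer)
        # Keep the exchange in history so the next turn has context.
        history += [{"role": "user", "content": user_input},
                    {"role": "assistant", "content": answer}]

# chat_loop()  # run interactively once your credentials are configured
```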

Lab 4: Manage Chat Conversation with Prompt Flow

In this lab, you will learn how to manage and structure chat conversations using prompt flow techniques. You will explore how to design dynamic conversation flows that adapt to user inputs by leveraging prompt engineering and state management.

The lab will guide you through setting up conversational logic, managing context across multiple exchanges, and ensuring seamless interactions by using prompt flow to adjust responses.

Manage Chat Conversation with Prompt Flow, Source: K21Academy


Key Takeaways from Lab: 

  • Create a Project: Set up a new Azure AI Foundry project to manage your AI model and resources.
  • Deploy Generative AI Model: Deploy a GPT model to start building your AI system for customer interactions.
  • Design Prompt Flow: Use Azure AI Foundry’s Prompt Flow tool to design a conversation flow that manages customer interactions, ensuring context awareness and on-topic responses.
  • Test the Flow: Test your flow with sample customer queries to ensure the chatbot delivers accurate and relevant responses.
  • Deploy the Flow: Once satisfied with the flow, deploy it live for real-world use in the e-commerce application.
  • Optimize Resource Management: Clean up resources to avoid unnecessary costs after the deployment.
  • Share Learnings: Share your project and insights on LinkedIn and in the community to showcase your skills and connect with others.

By the end of the lab, you will have the skills to create more interactive, context-aware chat applications that can maintain coherent dialogues and provide accurate responses throughout the conversation.

Lab 5: Create a generative AI app that uses your data

In this lab, you will develop a generative AI application that leverages your own dataset. You will learn how to prepare and preprocess data, select a suitable generative model, and integrate it into the app. The lab will guide you through fine-tuning the model using your data to generate meaningful outputs. You’ll also explore how to optimize the app for performance and accuracy.

Key Takeaways from Lab: 

  • Set Up Azure AI Foundry Project: Start by creating an Azure AI Foundry project to manage AI models and data efficiently.
  • Deploy Generative AI Models: Deploy GPT-4 and text-embedding models to process and generate insights from your company’s historical data.
  • Add Data to the Project: Import relevant company data (e.g., cloud logs) to train and enhance the AI model’s performance.
  • Create and Test Data Index: Build an index using Azure AI Search to organize data and ensure it’s accessible for AI processing.
  • Implement RAG Pattern: Use a Retrieval-Augmented Generation (RAG) client app to combine AI-generated responses with real-time data.
  • Deploy the Application: Deploy the model and index, making the AI app available for live customer support interactions or predictive maintenance.
  • Clean Up Resources: After deployment, clean up resources to avoid unnecessary costs and optimize the project environment.
  • Share Learning: Share your project on LinkedIn and in the community to showcase your experience with generative AI development.

By the end of the lab, you will have a fully functional generative AI app tailored to your unique dataset, capable of producing data-driven outputs for real-world use cases.
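A minimal sketch of the RAG pattern described above, assuming an Azure AI Search index with a `content` field (the field name is illustrative) and an Azure OpenAI chat client:

```python
def build_grounded_prompt(question, documents):
    """Combine retrieved passages into a grounded prompt (the 'augmented' step of RAG)."""
    context = "\n\n".join(f"[doc {i + 1}] {d}" for i, d in enumerate(documents))
    return (
        "Answer using only the sources below. If they don't contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

def rag_answer(question, search_client, chat_client, deployment="gpt-4o", top=3):
    """Retrieve, augment, generate. search_client is an
    azure.search.documents.SearchClient bound to your index; chat_client is an
    AzureOpenAI client."""
    hits = search_client.search(search_text=question, top=top)
    docs = [hit["content"] for hit in hits]  # 'content' field name is an assumption
    resp = chat_client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": build_grounded_prompt(question, docs)}],
    )
    return resp.choices[0].message.content
```

The same retrieve-then-generate flow is what the deployed index and model perform behind the scenes in the live application.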

Lab 6: Fine-tune a Language Model

In this lab, you will learn how to fine-tune a pre-trained language model to perform specific tasks. You’ll begin by selecting an appropriate language model (such as GPT) and preparing a custom dataset to fine-tune it.

The lab will guide you through the process of training the model, adjusting hyperparameters, and evaluating its performance. You’ll also explore techniques for improving the model’s accuracy and generalization.

Fine-tune a Language Model, Source: K21Academy


Key Takeaways from Lab: 

  • Set Up Project in Azure AI Foundry: Create a new project to manage the AI model and resources.
  • Deploy Base Model: Deploy a GPT-4 model as a base for fine-tuning.
  • Prepare Custom Data: Collect and format your data (e.g., customer queries) in JSONL format for training.
  • Fine-Tune the Model: Train the model on the custom dataset to improve its responses for specific use cases like customer support.
  • Test the Model: Compare the performance of the fine-tuned model against the base model by testing it in the Chat Playground.
  • Deploy Fine-Tuned Model: Deploy the trained model to make it available for real-world use.
  • Evaluate Model Performance: Assess improvements in the model’s ability to handle domain-specific terms and customer interactions.
  • Clean Up Resources: After deployment, clean up the resources to optimize costs and maintain an efficient environment.
  • Share Learnings: Share the results and insights from the lab on LinkedIn and in the community to build your professional profile.

By the end of the lab, you will have hands-on experience fine-tuning a language model, enabling it to generate more accurate and context-specific responses for your particular use case.
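The JSONL preparation step can be sketched in a few lines of Python. The records below follow the chat-format layout the fine-tuning upload expects; the system prompt and sample Q&A pair are made up for illustration.

```python
import json

SYSTEM = "You are a customer support assistant for a cloud training company."

def to_jsonl_records(qa_pairs):
    """Convert (question, ideal_answer) pairs into chat-format fine-tuning records."""
    return [
        {"messages": [
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": q},
            {"role": "assistant", "content": a},
        ]}
        for q, a in qa_pairs
    ]

def write_jsonl(path, records):
    """Write one JSON object per line, as the fine-tuning service expects."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Example (the sample pair is invented):
pairs = [("How do I reset my lab password?",
          "Use the reset link on the portal sign-in page.")]
write_jsonl("training_data.jsonl", to_jsonl_records(pairs))
```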

Lab 7: Explore content filters in Azure AI Foundry

In this lab, you will explore the use of content filters in Azure AI Foundry to manage and enhance the output of AI models. You will learn how to implement and customize content filtering techniques to ensure the generated content meets specific requirements, such as appropriateness, relevance, and accuracy.

The lab will guide you through setting up filters for text and images, adjusting their parameters, and applying them in various AI-powered applications.

Explore content filters in Azure AI Foundry, Source: K21Academy


Key Takeaways from Lab:

  • Set up an AI Project in Azure AI Foundry: Begin by creating a project and deploying a model, such as the Phi-4 model, for customer support applications.
  • Content Filter Implementation: Use the built-in content filters to prevent harmful or inappropriate content, such as hate speech, profanity, or self-harm discussions.
  • Test Default Content Filters: Evaluate the default content filter by testing responses to sensitive prompts like self-harm or illegal activities to see how the filter blocks harmful content.
  • Create Custom Content Filters: Customize content filters for more specific needs, such as blocking certain types of language (violence, hate, sexual content, self-harm), and apply them to your deployed model.
  • Test Custom Filters: After applying custom content filters, test the model again with sensitive prompts to ensure harmful content is blocked effectively.
  • Deploy and Monitor the Model: Deploy the fine-tuned model with custom content filters and test its real-world performance, ensuring safety while maintaining responsiveness.
  • Clean Up Resources: After testing, clean up deployed resources to avoid unnecessary costs and maintain efficient project management.
  • Share Insights: Share your learnings on LinkedIn and community platforms to build your professional network and showcase your expertise in AI and content moderation.

By the end of the lab, you will have a solid understanding of content filtering mechanisms, helping you create more controlled and reliable AI outputs in your projects.
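One practical detail worth sketching: when a prompt trips the filter, the Azure OpenAI endpoint rejects the request with an error whose code is `content_filter`. A hedged Python sketch of handling that case in an application (the fallback message is illustrative):

```python
def is_content_filtered(error_body: dict) -> bool:
    """True if an error payload indicates the Azure OpenAI content filter fired.

    Assumes the documented error shape: {"error": {"code": "content_filter", ...}}.
    """
    return error_body.get("error", {}).get("code") == "content_filter"

def safe_chat(client, deployment, prompt):
    """Call the model, returning a friendly message when the filter blocks the prompt."""
    from openai import BadRequestError  # pip install openai
    try:
        resp = client.chat.completions.create(
            model=deployment, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content
    except BadRequestError as e:
        # The SDK surfaces the error code from the response body.
        if getattr(e, "code", None) == "content_filter":
            return "Sorry, that request was blocked by the content filter."
        raise
```

This keeps the app responsive even when a user submits a prompt your custom filters are configured to block.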

Lab 8: Evaluate generative AI performance

In this lab, you will focus on evaluating the performance of generative AI models. You will learn how to assess the quality of outputs generated by AI models based on various metrics such as accuracy, relevance, coherence, and creativity.

The lab will guide you through setting up evaluation frameworks, applying performance benchmarks, and interpreting results. Additionally, you will explore methods for fine-tuning models to improve their performance.

Evaluate generative AI performance, Source: K21Academy


Key Takeaways from Lab: 

  • Set Up and Deploy: Create an AI hub and project, then deploy the GPT-4 model with appropriate settings in Azure AI Foundry.
  • Manual Evaluation: Test model responses by submitting queries and rating them for accuracy and relevance.
  • Automated Evaluation: Use datasets and metrics (e.g., coherence, fluency) to evaluate the model’s performance efficiently.
  • Analyze Results: Review and save evaluation results to identify areas for improvement.
  • Refinement: Use insights from evaluations to enhance model performance.
  • Clean Up: Delete resources to avoid unnecessary costs.
  • Share Learnings: Share your experience on LinkedIn and community platforms to showcase your skills.

By the end of the lab, you will have the skills to effectively evaluate generative AI models and optimize them for better results in real-world applications.

Lab 9: Explore AI Agent development

In this lab, you will delve into the development of AI agents capable of performing autonomous tasks. You will explore how to design, build, and deploy intelligent agents using AI techniques such as reinforcement learning, natural language processing (NLP), and decision-making algorithms.

The lab will guide you through the process of integrating AI models with agent frameworks, enabling the agent to learn from interactions and make informed decisions.

Explore AI Agent development, Source: K21Academy


Key Takeaways from the Lab: 

  • Project Setup: Created an Azure AI Foundry project and set up an agent to support a specific use case (e.g., answering questions about an expenses policy).
  • Agent Creation: Built the AI agent, defining its role, behavior, and workflows using a model (e.g., GPT-4) and adding knowledge sources (like the expenses policy document).
  • Testing the Agent: Tested the agent’s performance by submitting various prompts to ensure it answers accurately, requests necessary information, and completes actions (e.g., generating expense claim text files).
  • Evaluation and Improvement: Continuously tested and fine-tuned the agent’s responses to improve performance in real-world scenarios.
  • Cleanup: Deactivated and deleted resources to avoid unnecessary costs.
  • Learning Sharing: Shared the results and learning outcomes on LinkedIn and the community to showcase skills.

By the end of the lab, you will have a solid understanding of AI agent development and the ability to create agents that can autonomously handle specific tasks in dynamic environments.

Lab 10: Develop an AI agent

In this lab, you will learn how to develop a fully functional AI agent capable of interacting with its environment and making decisions autonomously. You will explore the key components of AI agent development, including perception, decision-making, and action execution.

The lab will guide you through building the agent, defining its goals, and implementing learning algorithms such as reinforcement learning or rule-based systems. Additionally, you will test the agent’s performance and optimize its behavior based on feedback.

Develop an AI agent, Source: K21Academy


Key Takeaways from the Lab: 

  • Project Setup: Created a new Azure AI Foundry project for the AI agent.
  • Agent App: Built an agent client app for user interactions using pre-written code.
  • Agent Design: Developed agent behavior and decision-making logic.
  • Testing: Deployed and tested the agent for accurate response handling.
  • Code Implementation: Connected the agent to Azure services and handled data analysis.
  • Deployment & Cleanup: Deployed the app, tested it, and removed resources to save costs.
  • Sharing: Shared lab outcomes on LinkedIn and the community.

By the end of the lab, you will have hands-on experience in creating and deploying an intelligent agent that can carry out tasks with minimal human intervention.

Lab 11: Use a custom function in an AI agent

In this lab, you will learn how to enhance the capabilities of an AI agent by integrating custom functions. You will explore how to define and implement functions tailored to specific tasks or decision-making processes within the agent’s operation.

The lab will guide you through writing custom code that allows the agent to interact with its environment more effectively and perform specialized actions. Additionally, you’ll test the integration of these functions within the agent’s workflow and evaluate their impact on performance.

Key Takeaways from the Lab: 

  • Project Setup: Created an Azure AI Foundry project to develop the AI agent.
  • Custom Function: Defined a custom function to prioritize support tickets based on sentiment analysis.
  • Code Development: Cloned the code repository, configured settings, and wrote the function to handle tasks.
  • Agent Creation: Integrated the custom function into the agent and tested its ability to manage queries.
  • Testing & Deployment: Signed into Azure, ran the app, and evaluated its performance in a live environment.
  • Cleanup: Removed all resources after testing to avoid unnecessary costs.
  • Sharing: Shared outcomes on LinkedIn and within the community for professional engagement.

By the end of the lab, you will have the skills to customize AI agents with functions that extend their functionality for a wide range of applications.

Lab 12: Azure AI agent with the Semantic Kernel SDK 

In this lab, you will explore how to build and deploy an AI agent using the Semantic Kernel SDK in Azure. You will learn how to leverage Azure’s powerful AI tools, including the Semantic Kernel, to develop a sophisticated agent capable of processing and understanding complex data.

The lab will guide you through integrating the Semantic Kernel SDK with your agent, setting up workflows, and enhancing decision-making capabilities.

Azure AI agent with the Semantic Kernel SDK, Source: K21Academy


Key Takeaways from the Lab: 

  • Project Setup: Created an Azure AI Foundry project and deployed the GPT-4 model for AI agent development.
  • Agent Client App: Developed an agent client app that allows user interaction with the AI model for automating tasks like patient query management.
  • Custom Functions: Integrated custom functions (e.g., handling patient queries, processing data) using the Semantic Kernel SDK to enhance the agent’s capabilities.
  • Code Development: Wrote the necessary code to define agent behaviors, including using the custom functions and interacting with the AI model.
  • Testing: Logged into Azure, ran the app, and tested the agent’s ability to handle queries and perform tasks based on custom functions.
  • Cleanup: Deleted unused resources to avoid ongoing costs and maintain a clean Azure environment.
  • Sharing: Shared outcomes on LinkedIn and within the community to showcase learning and gain professional visibility.

By the end of the lab, you will have hands-on experience in creating AI agents that can process semantic data and perform advanced tasks using Azure’s AI infrastructure.

Lab 13: Develop a multi-agent solution

In this lab, you will learn how to develop a multi-agent system where multiple AI agents work collaboratively or independently to solve complex tasks. You will explore how to design and implement agents that can communicate, share data, and coordinate efforts to achieve common or individual goals.

The lab will guide you through setting up communication protocols, handling synchronization between agents, and managing their interactions effectively.

Key Takeaways from the Lab: 

  • Created a multi-agent system for logistics, with agents managing inventory, suppliers, and delivery routes.
  • Deployed a GPT-4 model in Azure AI Foundry and developed a client app for agent interaction.
  • Implemented group chat for real-time agent collaboration and decision-making.
  • Defined selection and termination strategies for agent communication.
  • Tested and validated the system in Azure, optimizing the logistics process.
  • Cleaned up resources post-testing to avoid unnecessary costs.
  • Shared learnings on LinkedIn and the community to demonstrate skills.

By the end of the lab, you will have the skills to build a multi-agent solution that can solve more complex problems by leveraging the power of collaboration between autonomous agents.

Lab 14: Connect AI agents to tools using Model Context Protocol (MCP)

In this lab, you will learn how to connect AI agents to external tools and services using the Model Context Protocol (MCP). You will explore how to extend the capabilities of your AI agents by integrating them with various tools and APIs, enabling them to perform more advanced tasks.

The lab will guide you through setting up MCP, defining communication channels, and configuring agents to interact with tools such as databases, APIs, or external software systems.

Connect AI agents to tools using MCP, Source: K21Academy


Key Takeaways from the Lab: 

  • Created an Azure AI Foundry project to develop an intelligent AI agent using MCP.
  • Integrated MCP function tools to enable real-time communication and collaboration between AI agents and external tools.
  • Developed agents for specific tasks like resource optimization and cloud management.
  • Configured the application settings and connected AI agents to a remote MCP server for tool communication.
  • Tested the agent’s functionality by running the app in Azure and ensured smooth communication with MCP tools.
  • Cleaned up resources post-testing to avoid unnecessary costs.
  • Shared findings and progress on LinkedIn and within the community for visibility and networking.

By the end of the lab, you will have hands-on experience in enhancing AI agents with external tool integrations, allowing them to perform a broader range of functions in real-world applications.

Lab 15: Analyze Text with Azure AI Language

In this lab, you will learn how to leverage the Azure AI Language service to analyze and extract valuable insights from text data. You will explore how to connect a client application to the service and process large volumes of unstructured text.

The lab will guide you through language detection, sentiment analysis, key phrase extraction, and entity recognition, using natural language processing (NLP) techniques to turn raw text into structured insights.

Analyze Text with Azure AI Language, Source: K21Academy


Key Takeaways from the Lab: 

  • Provisioned Azure Language Resource: Set up a language service resource in Azure to access language processing features.
  • Developed Application: Used Visual Studio Code to develop a Python app for text analysis.
  • Configured Azure SDK: Integrated Azure AI-Language SDK and connected it to the Azure Language resource.
  • Language Detection: Implemented functionality to automatically detect the language of hotel reviews.
  • Sentiment Analysis: Analyzed the sentiment (positive, neutral, negative) of each review.
  • Key Phrase Extraction: Extracted key phrases to identify main topics within the reviews.
  • Entity Recognition: Identified named entities like places, landmarks, and people mentioned in the reviews.
  • Linked Entity Extraction: Linked recognized entities to external sources like Wikipedia.
  • Cleaned Up Resources: Deleted Azure resources after completing the lab to avoid unnecessary charges.

By the end of the lab, you will have the skills to use Azure AI Language to efficiently process and analyze text data for actionable insights.
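The analysis steps above map onto the `azure-ai-textanalytics` SDK roughly as follows. The endpoint and key are placeholders you supply from your provisioned resource, and the small sentiment-tally helper is illustrative.

```python
def summarize_sentiment(labels):
    """Tally sentiment labels ('positive'/'neutral'/'negative') across reviews."""
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return counts

def analyze_reviews(endpoint, key, reviews):
    """Run the four analyses from the lab over a batch of review strings."""
    # pip install azure-ai-textanalytics
    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential
    client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
    # Note: each call can also return error documents; checks omitted for brevity.
    languages = [d.primary_language.iso6391_name for d in client.detect_language(reviews)]
    sentiments = [d.sentiment for d in client.analyze_sentiment(reviews)]
    phrases = [d.key_phrases for d in client.extract_key_phrases(reviews)]
    entities = [[e.text for e in d.entities] for d in client.recognize_entities(reviews)]
    return languages, sentiments, phrases, entities
```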

Lab 16: Create a Question Answering Solution 

In this lab, you will learn how to build a question-answering (QA) system using AI technologies. You will explore how to design and implement a solution that can automatically answer user queries based on a given dataset or knowledge base.

The lab will guide you through setting up a question-answering model, training it on your data, and integrating natural language processing (NLP) techniques to improve the accuracy and relevance of the answers.

Create a Question Answering Solution, Source: K21Academy


Key Takeaways from the Lab: 

  • Provisioned Azure Language Resource: Set up an Azure AI-Language service to enable question-answering capabilities.
  • Created QnA Project: Built a question-answering project in Azure Language Studio and added relevant data from documents and URLs to create a knowledge base.
  • Curated Knowledge Base: Added, edited, and customized Q&A pairs to improve the accuracy and relevance of responses.
  • Trained and Tested the Solution: Trained the knowledge base and tested it by submitting queries to validate its performance and response quality.
  • Deployed Knowledge Base: Published the project for integration with applications, providing real-time responses to user queries.
  • Configured and Developed Application: Set up the development environment in Visual Studio Code, added necessary code to integrate the QnA service, and implemented querying functionality.
  • Cleaned Up Resources: Deleted the Azure resources after testing to avoid ongoing costs and maintain a clean environment.

By the end of the lab, you will have hands-on experience in creating a QA solution that can understand and respond to user queries effectively in various real-world scenarios.
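Querying the deployed knowledge base can be sketched with the `azure-ai-language-questionanswering` SDK. The confidence threshold and fallback text below are assumptions, not part of the service.

```python
def best_confident_answer(answers, threshold=0.5):
    """Pick the highest-confidence answer above a threshold, else a fallback."""
    top = max(answers, key=lambda a: a["confidence"], default=None)
    if top and top["confidence"] >= threshold:
        return top["answer"]
    return "Sorry, I couldn't find a confident answer."

def ask(endpoint, key, project, deployment, question):
    """Query a deployed question-answering project."""
    # pip install azure-ai-language-questionanswering
    from azure.ai.language.questionanswering import QuestionAnsweringClient
    from azure.core.credentials import AzureKeyCredential
    client = QuestionAnsweringClient(endpoint, AzureKeyCredential(key))
    result = client.get_answers(
        question=question, project_name=project, deployment_name=deployment
    )
    return best_confident_answer(
        [{"answer": a.answer, "confidence": a.confidence} for a in result.answers]
    )
```

Filtering on confidence keeps the bot from returning a weak match when the knowledge base has no good answer.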

Lab 17: Create a Language Understanding Model 

In this lab, you will learn how to create a language understanding model capable of interpreting and processing natural language. You will explore how to train the model to understand different language constructs, such as intents, entities, and context, using data-driven approaches.

The lab will guide you through setting up and fine-tuning a language model, building custom language understanding pipelines, and applying natural language processing (NLP) techniques for improved performance.

Create a Language Understanding Model, Source: K21Academy


Key Takeaways from the Lab: 

  • Provisioned Azure AI-Language Resource to enable language understanding.
  • Created Conversational Language Project in Azure AI Language Studio.
  • Defined Intents like GetTime, GetDay, and GetDate to categorize queries.
  • Labeled Sample Utterances for each intent to train the model.
  • Trained and Tested the Model using labeled data and sample queries.
  • Added Entities to improve the model’s ability to extract specific data.
  • Retrained and Deployed the model for real-time usage.
  • Integrated with Application using Azure SDK for query processing.
  • Cleaned Up Resources to avoid unnecessary charges.

By the end of the lab, you will have hands-on experience in developing a language understanding model that can effectively interpret and respond to complex user inputs in various applications.
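The prediction call at the end of the lab can be sketched with the `azure-ai-language-conversations` SDK. The task payload follows the documented `Conversation` request shape; project and deployment names are placeholders.

```python
def build_clu_task(project, deployment, text):
    """Build the analyze-conversation task payload for a CLU prediction request."""
    return {
        "kind": "Conversation",
        "analysisInput": {
            "conversationItem": {"id": "1", "participantId": "user", "text": text}
        },
        "parameters": {"projectName": project, "deploymentName": deployment},
    }

def predict_intent(endpoint, key, project, deployment, text):
    """Return the top intent and extracted entities for a user utterance."""
    # pip install azure-ai-language-conversations
    from azure.ai.language.conversations import ConversationAnalysisClient
    from azure.core.credentials import AzureKeyCredential
    client = ConversationAnalysisClient(endpoint, AzureKeyCredential(key))
    result = client.analyze_conversation(task=build_clu_task(project, deployment, text))
    prediction = result["result"]["prediction"]
    return prediction["topIntent"], prediction["entities"]
```

For an utterance like "what time is it?", the deployed model from this lab would be expected to resolve the `GetTime` intent along with any labeled entities.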

Lab 18: Custom Text Classification using AI Language

In this lab, you will learn how to build a custom text classification model using AI language models. You will explore how to prepare your text data, define custom categories or labels, and train a model to classify the text into the appropriate categories.

The lab will guide you through using AI-powered tools to enhance the accuracy of classification, including data preprocessing, feature extraction, and model evaluation.

Custom Text Classification using AI Language, Source: K21Academy


Key Takeaways from the Lab: 

  • Provisioned Azure AI-Language Resource to enable custom text classification.
  • Uploaded Sample Articles for training the classification model.
  • Created a Custom Text Classification Project to organize and manage text categorization.
  • Labeled Data by tagging sample documents with predefined categories (e.g., Sports, News).
  • Trained a Model on labeled data to create a custom classification solution.
  • Evaluated the Model to ensure its accuracy and effectiveness using test data.
  • Deployed Model for real-time use in applications, enabling text classification.
  • Developed an Application in Visual Studio Code to integrate and test the classification model.
  • Configured the Application to connect to the Azure AI-Language service and handle text classification.
  • Tested the Application to ensure proper classification of documents and performance validation.
  • Cleaned up Azure Resources to avoid unnecessary charges.

By the end of the lab, you will have the skills to develop and deploy a custom text classification model that can be applied to real-world tasks such as sentiment analysis, spam detection, and topic categorization.

Lab 19: Extract Custom Entities

In this lab, you will learn how to extract custom entities from unstructured text data using AI-powered natural language processing (NLP) techniques. You will explore how to define and identify specific entities, such as names, dates, locations, or product types, based on the context of your dataset.

The lab will guide you through training a custom named entity recognition (NER) model, evaluating its performance, and fine-tuning it for greater accuracy.

Extract Custom Entities, Source: K21Academy


Key Takeaways from the Lab: 

  • Provisioned AI Resource: Set up custom entity recognition for classified ads.
  • Uploaded Data: Stored sample ads in Azure Blob Storage.
  • Created NER Project: Built a custom Named Entity Recognition model.
  • Labeled Data: Tagged ads with entities like ItemForSale, Price, and Location.
  • Trained Model: Trained the model using labeled data.
  • Evaluated Model: Tested accuracy and performance.
  • Deployed Model: Published for real-time entity extraction.
  • Developed App: Built a console app to extract entities.
  • Tested App: Validated entity extraction from ads.
  • Cleaned Up: Deleted resources to avoid additional costs.

By the end of the lab, you will have the expertise to extract relevant entities from text data for use in applications such as information retrieval, knowledge extraction, and data organization.
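A hedged sketch of calling the deployed custom NER model from Python with `azure-ai-textanalytics`; the entity categories shown are the lab's own examples, and the grouping helper is illustrative.

```python
def group_entities(entities):
    """Group extracted entities by category, e.g. {'Price': ['$50'], ...}."""
    grouped = {}
    for ent in entities:
        grouped.setdefault(ent["category"], []).append(ent["text"])
    return grouped

def extract_ad_entities(endpoint, key, project, deployment, ads):
    """Run the deployed custom entity recognition model over a batch of ads."""
    # pip install azure-ai-textanalytics
    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential
    client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
    poller = client.begin_recognize_custom_entities(
        ads, project_name=project, deployment_name=deployment
    )
    results = []
    for doc in poller.result():
        results.append(group_entities(
            [{"category": e.category, "text": e.text} for e in doc.entities]
        ))
    return results
```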

Start Your AI-102 Certification Journey Today

Lab 20: Translate Text with the Azure AI Translator

In this lab, you will learn how to use the Azure AI Translator to translate text between multiple languages. You will explore how to integrate the Azure Translator API into your application, configure language detection, and implement real-time translation features.

The lab will guide you through using advanced capabilities such as batch translation, custom translation models, and language identification.

Translate Text with the Azure AI Translator, Source: K21Academy

Source: K21Academy

Key Takeaways from the Lab: 

  • Provisioned Azure AI Translator Resource: Set up the Azure Translator service for translating text.
  • Prepared Development Environment: Configured Visual Studio Code for app development.
  • Configured Application: Connected app to Azure AI Translator service for seamless text translation.
  • Added Code for Translation: Implemented code to send and receive translated text.
  • Tested Application: Verified that the translation feature works correctly.
  • Cleaned Up Resources: Deleted resources to avoid unnecessary charges.

By the end of the lab, you will have the skills to incorporate Azure AI Translator into your projects, enabling seamless multilingual communication and improving global reach.
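The translation call the lab walks through can be sketched against the Translator REST API (v3.0) with only the Python standard library. The key and region are placeholders for the values from your provisioned Translator resource:

```python
import json
import urllib.request

ENDPOINT = "https://api.cognitive.microsofttranslator.com"

def build_translate_request(texts, to_langs, from_lang=None):
    """Pure helper: construct the v3.0 /translate URL and JSON body."""
    params = ["api-version=3.0"] + [f"to={lang}" for lang in to_langs]
    if from_lang:
        params.append(f"from={from_lang}")
    url = f"{ENDPOINT}/translate?" + "&".join(params)
    body = [{"Text": t} for t in texts]
    return url, body

def translate(texts, to_langs, key, region):
    """Send the request and return the parsed JSON response."""
    url, body = build_translate_request(texts, to_langs)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Ocp-Apim-Subscription-Region": region,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting `from_lang` lets the service auto-detect the source language, which is the language-detection behavior the lab configures.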

Lab 21: Create a Speech-enabled App with Azure AI Speech

In this lab, you will learn how to integrate speech recognition and synthesis into an application using Azure AI Speech services. You will explore how to convert spoken language into text (speech-to-text) and generate natural-sounding speech from text (text-to-speech).

The lab will guide you through setting up the Azure Speech SDK, handling different speech models, and implementing real-time speech features such as voice commands or interactive dialogues.

Create Speech-enabled App: Azure AI Speech, Source: K21Academy

Source: K21Academy

Key Takeaways from the Lab: 

  • Provisioned Azure AI Speech Resource: Set up the necessary Azure resource to support speech recognition and synthesis.
  • Configured Development Environment: Prepared Visual Studio Code for development and installed the required Speech SDK.
  • Integrated Azure AI Speech SDK: Added code to recognize and transcribe speech input into text.
  • Handled Microphone and Audio Input: Configured the system to use a microphone or audio file for speech input.
  • Processed Transcribed Commands: Implemented logic to interpret transcribed text commands for application functionality.
  • Implemented Speech Synthesis: Enabled text-to-speech conversion, including voice customization and SSML for fine control over speech output.
  • Cleaned Up Resources: Deleted resources after the lab to avoid additional charges.

By the end of the lab, you will be able to build a speech-enabled application that enhances user experience through voice interaction and accessibility.
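The speech-to-text and text-to-speech loop described above can be sketched with the Azure Speech SDK (`pip install azure-cognitiveservices-speech`). The key, region, and voice name are placeholder assumptions for your own resource settings:

```python
def build_ssml(text, voice="en-US-AriaNeural", lang="en-US"):
    """Pure helper: wrap text in minimal SSML for fine control over synthesis."""
    return (
        f"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        f"xml:lang='{lang}'><voice name='{voice}'>{text}</voice></speak>"
    )

def recognize_and_reply(key, region):
    """Listen once on the default microphone, then speak an echo reply."""
    import azure.cognitiveservices.speech as speechsdk  # third-party SDK

    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)

    # Speech to text: capture and transcribe one utterance.
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
    result = recognizer.recognize_once_async().get()
    print("Heard:", result.text)

    # Text to speech: speak a reply, using SSML to select the voice.
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
    synthesizer.speak_ssml_async(build_ssml(f"You said: {result.text}")).get()
```

The SSML wrapper is where the lab's voice customization (voice names, pauses) would be extended.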

Lab 22: Translate speech with the Azure Speech resource

In this lab, you will learn how to use the Azure Speech Resource to translate spoken language in real-time. You will explore how to configure speech-to-text and speech-to-speech translation capabilities, enabling seamless multilingual communication. The lab will guide you through setting up the Azure Speech SDK, handling different language models, and implementing live translation features for applications such as virtual assistants, real-time transcription, and multilingual support.

Translate speech with Azure Speech Resource, Source: K21Academy

Source: K21Academy

Key Takeaways from the Lab: 

  • Provisioned Azure Speech Resource: Set up an Azure Speech resource for speech recognition, translation, and synthesis.
  • Configured Development Environment: Prepared Visual Studio Code and Azure environment for building speech-enabled applications.
  • Integrated Speech Translation: Enabled real-time speech translation, converting spoken language into text and translating it into another language.
  • Speech Recognition: Implemented speech-to-text functionality, transcribing spoken input into text using Azure’s Speech-to-Text API.
  • Text-to-Speech: Added speech synthesis capabilities to convert translated text into spoken language, enhancing user interaction.
  • SSML Customization: Used Speech Synthesis Markup Language (SSML) for detailed control over speech output, including voice and pauses.
  • Developed Functional App: Built and tested an app that integrates speech recognition and synthesis for seamless real-time language translation.
  • Cleaned Up Resources: Deleted Azure resources to prevent unnecessary charges after the lab.

By the end of the lab, you will have the skills to integrate speech translation into your applications, allowing for efficient and accurate voice communication across languages.
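As a hedged sketch of the speech-translation flow, the Speech SDK's translation recognizer can capture one utterance and return translations keyed by language code. Key, region, and language codes are placeholders:

```python
def pick_translation(translations, preferred, fallback="en"):
    """Pure helper: choose a translation by language code with a fallback."""
    return translations.get(preferred, translations.get(fallback, ""))

def translate_speech_once(key, region, target="fr"):
    """Recognize one spoken English utterance and translate it to `target`."""
    import azure.cognitiveservices.speech as speechsdk  # third-party SDK

    config = speechsdk.translation.SpeechTranslationConfig(
        subscription=key, region=region
    )
    config.speech_recognition_language = "en-US"
    config.add_target_language(target)

    recognizer = speechsdk.translation.TranslationRecognizer(
        translation_config=config
    )
    result = recognizer.recognize_once_async().get()
    print("Recognized:", result.text)
    print("Translated:", pick_translation(result.translations, target))
```

Feeding the translated text to a speech synthesizer, as in the previous lab, completes the speech-to-speech pipeline.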

Lab 23: Develop an audio-enabled Chat App

In this lab, you will learn how to build an audio-enabled chat application using AI and speech technologies. You will integrate speech-to-text and text-to-speech capabilities into the chat app, allowing users to interact through both voice and text.

The lab will guide you through setting up real-time audio processing, converting spoken language into text for the chat interface, and using text-to-speech for generating voice responses.

Develop an audio-enabled Chat App, Source: K21Academy

Source: K21Academy

Key Takeaways from the Lab: 

  • Azure AI Project Setup: Established an Azure AI Project and deployed a multimodal AI model (Phi-4-multimodal-instruct) to process and respond to both text and audio inputs.
  • Client Application Development: Built a Python client application that connects to the Azure AI Project using the Azure AI SDK to interact with the deployed model.
  • Audio Prompt Submission: Implemented functionality to submit audio data via URLs, allowing the app to process voice messages and generate intelligent responses.
  • AI Chat Client Integration: Integrated a chat client with the deployed multimodal model to enable interactions via both text and audio inputs.
  • Speech-to-Text & Multimodal AI: Enabled speech recognition for processing voice messages and used generative AI to respond to prompts, summarizing or answering queries.
  • Functional Application: Developed a working audio-enabled chat app that can interact with users through voice and text, automating responses and providing valuable insights from audio input.
  • Resource Cleanup: Deleted resources from the Azure portal after testing to prevent unnecessary costs.

By the end of the lab, you will have created a fully functional chat app with seamless voice communication features, enhancing the user experience with hands-free interaction.

Lab 24: Explore the Voice Live API

In this lab, you will explore the Voice Live API to integrate real-time voice capabilities into your applications. You will learn how to set up the API for voice interactions, including features like live speech-to-text, text-to-speech, and voice recognition.

The lab will guide you through using the Voice Live API to build applications that support interactive voice commands, voice search, and real-time voice translation.

Explore the Voice Live API, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Real-Time Speech Interaction: Learn how to integrate the Azure AI Voice Live API to capture, transcribe, and process voice commands in real time.
  • Speech-to-Text Conversion: Use Azure’s Speech-to-Text capability to convert spoken queries into text, enabling seamless voice-driven applications.
  • Conversational AI Integration: Combine the Voice Live API with conversational models to understand user intent and provide relevant, context-aware responses.
  • Customizing Voice Agents: Explore how to configure and customize voice agents with different voices and avatars for a more personalized user experience.
  • Voice-Enabled Application Development: Build a fully functional voice assistant capable of handling queries like order tracking or account management through natural voice interaction.
  • End-to-End Pipeline: Set up an end-to-end real-time speech pipeline that processes audio input, recognizes speech, and generates responses instantly using text-to-speech.
  • Resource Management: After testing, clean up resources to avoid unnecessary costs and keep your development environment efficient.

By the end of the lab, you will have hands-on experience in utilizing the Voice Live API to enhance your applications with live voice interaction features.

Lab 25: Analyze images with Azure AI Vision

In this lab, you will learn how to analyze and extract insights from images using Azure AI Vision services. You will explore how to use computer vision techniques for tasks like object detection, image classification, optical character recognition (OCR), and scene understanding.

The lab will guide you through setting up the Azure AI Vision API, processing images, and extracting valuable data such as text from images or identifying objects within a scene.

Analyze images with Azure AI Vision, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Provision Azure AI Vision Resource: Learn how to set up an Azure AI Vision resource to analyze images using Azure Cognitive Services.
  • Develop Image Analysis Application: Use the Azure AI Vision SDK to build an application capable of analyzing images and generating captions, tags, and detecting objects and people.
  • Generate Image Captions: Automatically generate human-readable captions based on the contents of an image using Azure AI Vision.
  • Extract Tags from Images: Identify relevant tags for images, helping to categorize and understand the image content more easily.
  • Object Detection: Detect and locate objects in images, such as vehicles or buildings, and annotate them for better analysis and visualization.
  • People Detection: Identify and locate people in images, which is particularly useful for surveillance and security applications.
  • Clean Up Resources: After completing the analysis, remove unnecessary resources to manage costs and maintain an organized environment.

By the end of the lab, you will have the skills to integrate Azure AI Vision into your applications for powerful image analysis capabilities.
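The caption and tag analysis described above can be sketched with the Image Analysis SDK (`pip install azure-ai-vision-imageanalysis`); the endpoint, key, and image URL are placeholders, and the result attribute names reflect the 1.x SDK:

```python
def confident_tags(tags, threshold=0.8):
    """Pure helper: keep (name, confidence) pairs above a threshold."""
    return [(name, conf) for name, conf in tags if conf >= threshold]

def analyze_image(endpoint, key, image_url):
    """Request a caption and tags for an image at a public URL."""
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.vision.imageanalysis import ImageAnalysisClient
    from azure.ai.vision.imageanalysis.models import VisualFeatures

    client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    result = client.analyze_from_url(
        image_url=image_url,
        visual_features=[VisualFeatures.CAPTION, VisualFeatures.TAGS],
    )
    print("Caption:", result.caption.text, f"({result.caption.confidence:.2f})")
    tags = [(t.name, t.confidence) for t in result.tags.list]
    for name, conf in confident_tags(tags):
        print("Tag:", name, f"({conf:.2f})")
```

Adding `VisualFeatures.OBJECTS` or `VisualFeatures.PEOPLE` to the feature list covers the object and people detection steps of the lab.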

Lab 26: Read Text in Images using Azure AI Vision SDK

In this lab, you will learn how to use the Azure AI Vision SDK to read and extract text from images. You will explore how to integrate the SDK into your application to process images with embedded text, leveraging Optical Character Recognition (OCR) to convert the text into a machine-readable format.

The lab will guide you through setting up the Azure AI Vision SDK, handling different image formats, and optimizing OCR results for accuracy.

Read text in images, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Provision Azure AI Services Resource: Set up an Azure AI Vision resource for text extraction from images using OCR technology.
  • Develop Text Extraction Application: Use the Azure AI Vision SDK to build an application that extracts printed and handwritten text from images accurately.
  • Text Extraction from Images: Leverage OCR technology to convert image-based text into machine-readable format, making it actionable for further processing.
  • Identify Word Locations: Extract and display individual words from images, including their positions and confidence levels for accurate analysis.
  • Annotate Extracted Text: Use bounding polygons to annotate the detected text, providing a visual representation of text locations in the image.
  • Clean Up Resources: After completing the analysis, remove any unused resources to avoid unnecessary charges and maintain a clean environment.
  • Enhance Applications: Integrate text recognition capabilities into various applications, such as document processing, form extraction, or customer interaction automation.

By the end of the lab, you will have the skills to implement text extraction from images, enabling automated processing of scanned documents, receipts, and other image-based text data.
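The OCR step can be sketched with the same Image Analysis SDK using the `READ` feature; as before, the endpoint, key, and image path are placeholders:

```python
def lines_to_text(lines):
    """Pure helper: join recognized line strings into a single document."""
    return "\n".join(lines)

def read_text(endpoint, key, image_path):
    """Extract printed or handwritten text from a local image file."""
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.vision.imageanalysis import ImageAnalysisClient
    from azure.ai.vision.imageanalysis.models import VisualFeatures

    client = ImageAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
    with open(image_path, "rb") as f:
        result = client.analyze(
            image_data=f.read(), visual_features=[VisualFeatures.READ]
        )

    lines = []
    for block in result.read.blocks:
        for line in block.lines:
            lines.append(line.text)
            # Each word carries a confidence score and a bounding polygon.
            for word in line.words:
                print(f"  word {word.text!r} confidence {word.confidence:.2f}")
    return lines_to_text(lines)
```

The per-word bounding polygons (not shown) are what the lab uses to annotate text locations on the image.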

Lab 27: Detect and Analyze Faces

In this lab, you will learn how to detect and analyze faces in images using Azure AI Vision services. You will explore how to utilize facial recognition technology to identify key features such as age, gender, emotions, and facial landmarks. The lab will guide you through setting up the Azure AI Vision SDK, processing images for face detection, and analyzing the attributes of the detected faces.

Detect and Analyze Faces, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Provision Azure AI Face API Resource: Learn to create and configure an Azure AI Face API resource for facial recognition and analysis.
  • Develop a Facial Analysis Application: Use the Azure Face SDK to build an app that can detect and analyze faces in images, extracting attributes like head pose, occlusion, and accessories.
  • Face Detection: Implement face detection algorithms to accurately locate human faces in images, even under challenging conditions like varying lighting or occlusions.
  • Face Attribute Analysis: Extract and analyze facial attributes, such as emotions, age, and accessories, to enhance user experience and provide insights for various applications.
  • Face Identification: Identify known individuals by comparing detected faces to a database, suitable for use cases like access control or security.
  • Image Annotation: Annotate detected faces in images with bounding boxes, visualizing the results of face detection and analysis.
  • Clean Up Resources: After completing the lab, delete unused resources to maintain an organized environment and avoid unnecessary charges.

By the end of the lab, you will be able to integrate face detection and analysis into your applications, enabling features like personalized user experiences, security, and emotion detection.

Lab 28: Classify images with AI Vision custom vision

In this lab, you will learn how to use Azure AI Vision’s Custom Vision service to create and train custom image classification models. You will explore how to upload and label your own dataset of images, train the model to recognize specific objects or categories, and evaluate its performance.

The lab will guide you through setting up the Custom Vision SDK, deploying the trained model, and integrating it into your applications for real-time image classification.

Classify images with AI Vision custom vision, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Provision Azure AI Custom Vision Resource: Set up the Custom Vision resource to train and deploy image classification models for specific tasks, such as recognizing different types of fruits.
  • Create a Custom Vision Project: Build a new project in the Custom Vision portal, select classification types, and upload labeled training images to train your model.
  • Train Custom Image Classification Model: Train a model using the uploaded images (e.g., apple, banana, orange) to classify and identify these fruits with high accuracy.
  • Test the Model: Evaluate the trained model’s performance by testing it with new images, and review the precision, recall, and other metrics to measure the model’s accuracy.
  • Publish the Model: Once the model is trained and tested, publish it for prediction purposes, enabling the model to classify real-world images in your client applications.
  • Integrate the Classifier into an Application: Use the trained and published model in a client application to classify images in real-time, providing users with relevant information about the identified fruits.
  • Clean Up Resources: After completing the lab, delete the Azure resources to avoid unnecessary costs and maintain a tidy environment.

By the end of the lab, you will have the skills to build a custom image classification solution tailored to your specific needs, enabling more accurate and context-specific image analysis.
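Once the model is published, the client-side prediction call can be sketched with the Custom Vision prediction SDK (`pip install azure-cognitiveservices-vision-customvision`). The project ID and published model name are placeholders for the values from your Custom Vision portal:

```python
def top_prediction(predictions):
    """Pure helper: return the (tag, probability) pair with highest probability."""
    return max(predictions, key=lambda p: p[1])

def classify_fruit(endpoint, key, project_id, model_name, image_path):
    """Classify a local image against a published Custom Vision model."""
    from msrest.authentication import ApiKeyCredentials
    from azure.cognitiveservices.vision.customvision.prediction import (
        CustomVisionPredictionClient,
    )

    credentials = ApiKeyCredentials(in_headers={"Prediction-key": key})
    predictor = CustomVisionPredictionClient(endpoint, credentials)
    with open(image_path, "rb") as f:
        results = predictor.classify_image(project_id, model_name, f.read())
    return [(p.tag_name, p.probability) for p in results.predictions]
```

With the lab's fruit classifier, `top_prediction` over the returned list yields the most likely label (e.g. apple, banana, or orange).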

Lab 29: Detect Objects in Images with Custom Vision 

In this lab, you will learn how to use Azure AI’s Custom Vision service to detect objects within images. You will explore how to train a custom object detection model by uploading and labeling images with specific objects.

The lab will guide you through the process of setting up the Custom Vision service, training the model to recognize multiple objects, and evaluating its performance. You will also learn how to deploy the model to detect objects in real-time images.

Detect Objects in Images with Custom Vision, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Set Up Resources: Create Azure Custom Vision resources for training and prediction.
  • Train Model: Upload, tag, and train the model on fruit images (apple, banana, orange).
  • Publish Model: Make the trained model available for use in applications.
  • Deploy Model: Integrate the model into a client app for real-time object detection.
  • Test & Optimize: Validate the model’s performance and refine it.
  • Clean Up: Delete resources to avoid unnecessary costs.

By the end of the lab, you will have the skills to create custom object detection models tailored to your specific needs, enabling advanced image analysis in your applications.

Lab 30: Analyze video with Video Indexer

In this lab, you will learn how to use Azure Video Indexer to analyze and extract valuable insights from videos. You will explore how to upload video files, automatically transcribe speech, identify faces, detect objects, and recognize emotions within the video content.

The lab will guide you through the process of setting up Video Indexer, configuring analysis settings, and interpreting the extracted metadata for further use in applications.

Analyze video With Video Indexer, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Set Up Resources: Clone the repository and create Video Indexer resources in Azure.
  • Upload & Analyze Video: Upload a video to Video Indexer for AI-powered analysis (e.g., speech-to-text, object detection).
  • Review Insights: Explore insights like transcriptions, key frames, faces, emotions, and objects detected in the video.
  • Search Insights: Use the search functionality to locate specific insights in the video.
  • Embed Widgets: Learn to embed Video Indexer widgets for real-time insights display.
  • REST API: Use the Video Indexer REST API for automating video analysis and custom integration.
  • Clean Up: No resources need deletion in this lab, but keep the environment tidy.

By the end of the lab, you will have the skills to integrate video analysis capabilities into your solutions, enabling rich, data-driven insights from video content.

Lab 31: Develop a vision-enabled chat app

In this lab, you will learn how to build a vision-enabled chat application that integrates image recognition capabilities. You will explore how to use Azure AI Vision services to analyze images sent by users in the chat and provide context-aware responses based on the visual content.

The lab will guide you through setting up a chat interface, integrating image analysis features like object detection or text recognition, and creating interactive responses based on the images.

Develop a vision-enabled chat app, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Azure Setup: Provisioned an Azure OpenAI resource and AI Foundry for multimodal AI model deployment.
  • Developed Chat Interface: Built a chat app that accepts text and image inputs.
  • Image Analysis Integration: Integrated image analysis logic using Azure Vision services (e.g., recognizing fruits).
  • Multimodal Interaction: Tested the chat app’s capability to handle both text and image prompts, providing context-aware responses.
  • Clean-Up: Deleted all resources to prevent unnecessary charges.

By the end of the lab, you will have developed a chat application that enhances user interaction through vision-enabled functionalities, providing intelligent insights based on images shared within the conversation.

Lab 32: Generate images with AI

In this lab, you will learn how to generate images using AI models. You will explore how to use generative image models such as DALL-E to create realistic and creative images from scratch or from natural-language descriptions.

The lab will guide you through selecting the right model, training or fine-tuning it, and generating images that match specific descriptions or styles.

Generate images with AI, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Azure OpenAI Setup: Provisioned the Azure OpenAI resource for access to the DALL-E model to generate images based on text prompts.
  • Development Environment: Set up Visual Studio Code with the necessary dependencies for integrating image generation capabilities into the app.
  • Image Generation: Used the DALL-E model to generate images from natural language descriptions, testing different prompts.
  • Client Application: Built a client app to send text prompts, retrieve generated images, and display them within the application.
  • Cost Management: Managed costs by utilizing Azure’s pricing models, including the free tier for small-scale image generation.
  • Clean-Up: Deleted resources post-lab to avoid incurring unnecessary charges.

By the end of the lab, you will have the skills to generate high-quality images for various applications such as creative content creation, design, and more, using AI-driven tools.
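The image-generation call can be sketched with the `openai` Python package's Azure client (`pip install openai`); the endpoint, key, deployment name, and API version are placeholders for your Azure OpenAI resource:

```python
def build_image_prompt(subject, style=None):
    """Pure helper: compose a DALL-E prompt from a subject and optional style."""
    return f"{subject}, in the style of {style}" if style else subject

def generate_image(endpoint, key, deployment, prompt):
    """Generate one image from a text prompt and return its URL."""
    from openai import AzureOpenAI  # third-party package

    client = AzureOpenAI(
        azure_endpoint=endpoint, api_key=key, api_version="2024-02-01"
    )
    result = client.images.generate(model=deployment, prompt=prompt, n=1)
    return result.data[0].url
```

Varying the prompt, as in `build_image_prompt("a bowl of fruit", "watercolor")`, is how the lab explores the model's response to different descriptions.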

Lab 33: Extract information from multimodal content

In this lab, you will learn how to extract valuable information from multimodal content, such as combining text, images, audio, and video. You will explore how to use AI tools to process and analyze various forms of data simultaneously, extracting insights like text from images (OCR), speech from audio, and object recognition from videos.

The lab will guide you through integrating different AI services, such as Azure AI Vision and Speech APIs, to create a unified solution for processing multimodal content.

Extract information from multimodal content, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Setup: Configured Azure AI Document Intelligence to extract structured data from documents, images, audio, and video.
  • Schema Creation: Defined schemas for content types (invoices, slides, voicemail, and video).
  • Extraction: Built analyzers to extract relevant data (e.g., dates, addresses, and tasks) from various document formats.
  • Cost: Managed costs based on usage (e.g., Speech-to-Text services).
  • Cleanup: Cleaned up resources after completion to avoid unnecessary charges.

By the end of the lab, you will have the skills to extract actionable information from a variety of media types, enabling more sophisticated and data-rich applications.

Lab 34: Develop a Content Understanding client app

In this lab, you will learn how to develop a client application that leverages AI-driven content understanding capabilities. You will explore how to integrate natural language processing (NLP), image recognition, and other AI services to allow your app to comprehend and interpret various types of content, such as text, images, and videos.

The lab will guide you through building an intuitive interface, processing content, and utilizing AI models to derive meaningful insights.

Develop a Content Understanding client app, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Setup: Provisioned Azure AI Document Intelligence for analyzing documents (e.g., invoices, contracts) using the REST API.
  • API Integration: Developed a client application to authenticate, send documents, and receive structured data (text, key-value pairs, tables).
  • Schema Creation: Defined custom schemas for business card and invoice data extraction.
  • Response Handling: Implemented code to parse the JSON response and extract meaningful information.
  • Testing: Ran tests on documents (e.g., business cards) to validate data extraction.
  • Cost: Minimal cost for API usage, with charges for processing API requests and data storage.
  • Cleanup: Deleted Azure resources to avoid unnecessary charges.

By the end of the lab, you will have a content understanding client app that can intelligently analyze and process diverse content, providing users with valuable insights and enhancing their experience.

Lab 35: Analyze forms with prebuilt Azure AI Document Intelligence models

In this lab, you will learn how to use Azure AI Document Intelligence to analyze and extract data from forms automatically. You will explore how to integrate the prebuilt models for form recognition, which can identify key fields such as text, tables, and checkboxes within scanned or digital form images.

The lab will guide you through setting up the Azure AI Document API, processing different types of forms, and extracting structured data for further use in your applications.

Analyze forms with prebuilt Azure AI Document, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Setup: Provisioned Azure AI Document Intelligence to analyze forms like invoices and claims using prebuilt models.
  • API Integration: Developed a client app using Azure SDK to send form data, process it, and retrieve structured information (e.g., claim numbers, customer names).
  • Form Analysis: Utilized the Read model for OCR to extract text and details from multilingual documents.
  • Schema Definition: Configured schemas for extracting specific fields (e.g., VendorName, InvoiceTotal).
  • Response Handling: Parsed JSON responses to extract structured data and confidence scores for efficient document processing.
  • Testing: Ran tests on sample forms to ensure accurate data extraction.
  • Cost: Minimal cost for API calls, with pricing based on the number of pages processed.
  • Cleanup: Deleted Azure resources after completing the lab to avoid additional charges.

By the end of the lab, you will have the skills to efficiently process and extract information from forms, streamlining document management and data entry tasks.
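The prebuilt-model analysis can be sketched with the Document Intelligence (Form Recognizer) SDK (`pip install azure-ai-formrecognizer`); the endpoint, key, and file path are placeholders:

```python
def field_value(fields, name, default=None):
    """Pure helper: look up one field's (value, confidence) pair from a dict
    of {name: (value, confidence)} entries built from an analysis result."""
    return fields.get(name, (default, 0.0))

def analyze_invoice(endpoint, key, path):
    """Run the prebuilt invoice model and flatten its fields into a dict."""
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.formrecognizer import DocumentAnalysisClient

    client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))
    with open(path, "rb") as f:
        poller = client.begin_analyze_document("prebuilt-invoice", document=f)
    result = poller.result()

    fields = {}
    for doc in result.documents:
        for name, field in doc.fields.items():
            fields[name] = (field.content, field.confidence)
    return fields
```

`field_value(fields, "VendorName")` then returns both the extracted value and the confidence score the lab uses to judge extraction quality.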

Lab 36: Analyze forms with a custom Azure AI Document Intelligence model

In this lab, you will learn how to create and deploy a custom form analysis solution using Azure AI Document Intelligence. You will explore how to train a custom model to recognize and extract data from forms that are not covered by the prebuilt models.

The lab will guide you through the process of labeling your data, training the custom model, and fine-tuning it for your specific use case. Additionally, you will learn how to integrate the custom model into your applications for automated form processing.

Analyze forms with custom Azure AI Document, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Setup: Provisioned Azure AI Document Intelligence for custom document analysis, enabling tailored extraction from diverse forms.
  • Training Data: Uploaded sample forms (e.g., patient intake forms) into Azure Blob Storage for model training.
  • Model Creation: Used Azure AI Document Intelligence Studio to label and train a custom model for specific form layouts and data points.
  • Testing & Evaluation: Evaluated the trained model by analyzing sample documents and verifying extracted fields like patient names, insurance numbers, etc.
  • App Development: Built a client app in Azure Cloud Shell to send documents to the trained model and receive structured data.
  • Deployment: Deployed the custom model to Azure for real-time use via API endpoint.
  • Cost: Based on usage, with custom extraction models priced at $0.03 per page.
  • Cleanup: Deleted resources after testing to avoid extra costs.

By the end of the lab, you will have the skills to build a tailored form recognition solution that meets your unique data extraction requirements.

Lab 37: Create a knowledge mining solution

In this lab, you will learn how to build a knowledge mining solution using Azure AI services. You will explore how to ingest and process large volumes of unstructured data, such as documents, images, and other content types, and extract meaningful insights. The lab will guide you through using Azure AI Search, AI-powered text analysis, and document processing to create a knowledge mining pipeline.

You will also learn how to enrich the data, index it, and implement powerful search and query capabilities.

Create a knowledge mining solution, Source: K21Academy

Source: K21Academy

Key Takeaways from Lab: 

  • Setup: Created Azure AI Search and Blob Storage resources to support a knowledge mining solution for legal research documents.
  • Document Upload: Uploaded scanned legal documents (e.g., case files, contracts) to Azure Blob Storage for indexing.
  • Indexer Configuration: Set up an indexer in Azure AI Search to extract content from the documents and apply cognitive enrichments like OCR, key phrase extraction, and entity recognition.
  • Content Enrichment: Applied AI skills to enhance document understanding and make the content more searchable (e.g., extracting case outcomes, involved parties).
  • Search & Explore: Tested the system by running queries in the Search Explorer, validating that the indexed data was accurate and actionable.
  • App Development: Developed a client app to interact with the enriched search index via SDK (Python or C#).
  • Cost: Costs involved are based on the amount of data processed and stored (e.g., $1.23 per month for storage).
  • Cleanup: Deleted resources after testing to avoid extra charges.

By the end of the lab, you will have the skills to develop a knowledge mining solution that enables you to discover hidden insights and make data-driven decisions across a wide range of content.
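Querying the enriched index from a client app can be sketched with the Azure AI Search SDK (`pip install azure-search-documents`). The endpoint, key, index name, and the `metadata_storage_name` field (a common blob-indexer field) are assumptions for illustration:

```python
def format_hits(hits):
    """Pure helper: render (title, score) pairs for display."""
    return [f"{title} (score {score:.2f})" for title, score in hits]

def search_documents(endpoint, key, index_name, query):
    """Run a full-text query against the enriched search index."""
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint=endpoint, index_name=index_name,
        credential=AzureKeyCredential(key),
    )
    results = client.search(search_text=query)
    return [(doc["metadata_storage_name"], doc["@search.score"]) for doc in results]
```

Passing queries like "case outcome breach of contract" and printing `format_hits(...)` mirrors the Search Explorer validation step of the lab.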

Lab 38: Using Advanced Prompt Techniques

In this lab, you will dive into advanced prompt engineering using foundational models in Azure AI Studio. The goal is to enhance text generation capabilities by experimenting with techniques like Zero-Shot, One-Shot, Few-Shot, and Chain of Thought. You will work through practical scenarios, including summarization, multilingual translation, and reasoning-based tasks, to explore the impact of these advanced prompts on model performance.

Foundation models

Key Takeaways from Lab: 

  • Zero-Shot Prompting: Task performance without prior examples, relying on the model’s existing knowledge.
  • One-Shot Prompting: Provides one example to guide the model’s task performance.
  • Few-Shot Prompting: Enhances accuracy by giving a few examples to the model.
  • Chain of Thought Prompting: Breaks down complex tasks into logical steps for clearer reasoning.
  • Prompt Optimization: Refines prompts for better model performance in real-world tasks.
  • Azure AI Studio: Hands-on practice using Azure’s Playground to implement and test advanced prompt techniques.
  • Advanced Techniques: Explored techniques like Role-Playing, Meta-Prompting, and Instruction Prompting for better control over model responses.
  • Real-World Applications: Applied techniques to create solutions like AI chatbots and decision-making systems.
  • Sharing Learnings: Encouraged sharing insights on LinkedIn and within the community for professional growth.

By the end of this lab, you’ll have hands-on experience in designing, testing, and optimizing prompts, enabling you to develop more effective text generation models for a wide range of applications, from chatbots to complex decision-making systems.
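The difference between zero-shot and few-shot prompting comes down to how the message list sent to the model is assembled. This small, self-contained helper (the system text and examples are illustrative, not from the lab) shows the pattern:

```python
def build_few_shot_messages(system, examples, user_input):
    """Assemble a chat payload: a system message, then (input, output)
    example pairs as alternating user/assistant turns, then the real query.
    An empty `examples` list degenerates to a zero-shot prompt."""
    messages = [{"role": "system", "content": system}]
    for example_in, example_out in examples:
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": user_input})
    return messages
```

The returned list is the shape expected by chat-completion APIs such as Azure OpenAI's; swapping in a "think step by step" instruction in the system text is one simple way to approximate Chain of Thought prompting.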

Lab 39: Develop a multimodal generative AI app

In this hands-on lab, you will learn how to build a multimodal generative AI app using Azure AI Studio. The goal is to develop a solution that can analyze and generate responses based on text, image, and audio inputs. You will work with a pre-trained multimodal model to understand and respond to diverse content, enabling you to automate content tagging and improve searchability across different media types.

Multimodal AI

Key Takeaways from Lab: 

  • Multimodal AI Overview: Learn the basics of multimodal models, combining text, image, and audio inputs for more context-rich AI applications.
  • Azure AI Foundry Setup: Create and configure an Azure AI Foundry project to deploy multimodal models for content analysis.
  • Pre-trained Models: Deploy and work with the Phi-4-multimodal-instruct model to process diverse input types such as text, images, and audio.
  • Client Application Development: Develop a client application that interacts with your multimodal model, using Python or C# to send and receive prompts.
  • Text, Image, and Audio Prompts: Implement text-based, image-based, and audio-based prompts to interact with your model, enabling rich user interactions.
  • Testing and Evaluation: Evaluate the model’s responses across different modalities to understand its performance and fine-tune as needed.
  • Deployment: Deploy the multimodal model and make it accessible via real-time AI model inference, ensuring it is ready for live application.
  • Resource Management: Learn the importance of cleaning up Azure resources after use to avoid unnecessary costs and maintain an efficient development environment.

By the end of this lab, you will be able to integrate multimodal AI into applications such as grocery assistants, customer support bots, and more.
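As a concrete illustration of the text-and-image prompts described above, the sketch below composes a mixed-content chat message in the OpenAI-style content-parts format, which Azure AI model inference accepts for multimodal chat models. The question and image URL are placeholders, not values from the lab.

```python
# Sketch: composing a combined text + image prompt in the OpenAI-style
# content-parts format used for multimodal chat models. The image URL
# below is a placeholder for illustration only.

def build_image_prompt(question, image_url):
    """Return a single user message containing a text part and an
    image_url part, ready to send to a multimodal chat deployment."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

messages = build_image_prompt(
    "Suggest a recipe using the groceries in this photo.",
    "https://example.com/groceries.jpg",  # placeholder URL
)
```

Audio prompts follow the same pattern, with an audio content part in place of the image part, depending on what the deployed model supports.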

Lab 40: Use prompt flow for NER in the AI Foundry portal

In this hands-on lab, you will learn how to perform Named Entity Recognition (NER) using Prompt Flow within the Azure AI Foundry portal. NER is a key technique in Natural Language Processing (NLP) that extracts and classifies specific entities (like names, locations, and dates) from text. The lab will guide you through setting up a flow that integrates a GPT model to identify these entities, enhancing the capabilities of a customer support chatbot by understanding domain-specific terms and customer sentiment.


Named Entity Recognition

By the end, you will have hands-on experience in designing and deploying a functional NER workflow using Azure AI Foundry.

Key Takeaways from the Lab:

  • Understanding NER: Learn how to extract and classify important entities (e.g., names, dates, locations) from text using NER, a critical component in many NLP applications.
  • Azure AI Foundry Setup: Create a project and deploy a GPT model within Azure AI Foundry to perform NER tasks efficiently.
  • Building a Prompt Flow: Design and implement a flow in Azure AI Foundry to process text inputs, configure model nodes, and extract entities using a pre-trained GPT model.
  • Configuring the LLM Node: Customize the GPT model’s prompt for entity extraction and fine-tune it for domain-specific queries like customer support interactions.
  • Python Node for Output Cleaning: Use a Python node to clean and process the extracted entities, ensuring they are correctly formatted for use in applications.
  • Running and Testing the Flow: Test your flow with sample text inputs to verify that entities are correctly identified and extracted.
  • Optimizing Chatbot Performance: Enhance a customer-facing chatbot by integrating NER to improve its responsiveness and context-awareness for product-related queries.
  • Resource Management: Learn how to manage and delete Azure resources after completing the lab to avoid unnecessary costs.
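The output-cleaning step above can be sketched as a small Python function of the kind a prompt flow Python node might contain. The "Type: value" line format is an assumption about how the prompt instructs the model to respond; the sample text is illustrative.

```python
# Sketch: a cleaning step like the lab's Python node, normalizing an LLM's
# raw entity output into a structured list. The "Type: value" line format
# is an assumed convention, not the lab's exact prompt contract.
import re

def clean_entities(raw_output):
    """Parse lines like 'Person: Ada Lovelace' into typed entity dicts,
    skipping non-matching lines and stripping stray punctuation."""
    entities = []
    for line in raw_output.splitlines():
        match = re.match(r"\s*([A-Za-z ]+):\s*(.+)", line)
        if match:
            etype = match.group(1).strip().lower()
            value = match.group(2).strip().strip(".,")
            entities.append({"type": etype, "value": value})
    return entities

raw = """Person: Ada Lovelace
Location: London.
Date: 10 December 1815"""
entities = clean_entities(raw)
```

Downstream nodes (or the chatbot itself) can then consume the structured list instead of free-form model text, which is what makes the extracted entities usable in an application.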

Lab 41: Add your data for RAG with Azure OpenAI Service

In this hands-on lab, you will learn how to implement Retrieval-Augmented Generation (RAG) using Azure OpenAI Service to enhance customer support experiences. The goal is to integrate Azure OpenAI Service with Azure Cognitive Search to retrieve relevant documents from a large database and generate contextually relevant, accurate responses. You will upload proprietary data to Azure Blob Storage, index it using Azure Cognitive Search, and deploy AI models for RAG, enabling the generation of responses grounded in real-time, domain-specific knowledge.

RAG

By the end of this lab, you’ll be able to develop a system that utilizes RAG to provide precise, data-driven answers to user queries.

Key Takeaways from the Lab:

  • Understanding RAG: Learn how Retrieval-Augmented Generation (RAG) combines the generative power of LLMs with the precision of retrieval-based systems to produce contextually accurate and relevant responses.
  • Azure OpenAI Service: Gain hands-on experience in integrating Azure OpenAI Service with retrieval-based systems to enhance AI-generated responses with domain-specific data.
  • Azure Cognitive Search: Learn how to use Azure Cognitive Search to index large datasets, making it easy to retrieve relevant information during the AI model’s response generation.
  • Data Upload and Indexing: Upload data to Azure Blob Storage and index it with Azure Cognitive Search, allowing the model to access the most relevant information for response generation.
  • Model Deployment for RAG: Deploy the text-embedding-ada-002 and GPT-35-turbo models to enable effective RAG implementation, combining the retrieved data with generative outputs for more informed answers.
  • Application Configuration: Configure an application in Visual Studio Code to integrate Azure OpenAI Service with Azure Cognitive Search, enabling the system to retrieve and generate responses based on indexed data.
  • Testing the System: Test the system by running the application, validating its ability to generate responses based on the contextually relevant data retrieved from Azure Cognitive Search.
  • Resource Management: Learn the importance of cleaning up Azure resources after completing the lab to avoid unnecessary costs and maintain a tidy development environment.
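The retrieve-then-generate pattern described above can be sketched end to end. In the sketch below a simple word-overlap retriever stands in for the Azure Cognitive Search index, and the grounded message list would be sent to a GPT chat deployment; the documents and query are illustrative.

```python
# Sketch of the RAG pattern: retrieve the most relevant documents for a
# query, then ground the chat prompt in them. A keyword-overlap retriever
# stands in for a real search index; documents are illustrative.
import re

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (a stand-in for
    vector or keyword search against an index)."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(query, documents):
    """Assemble a chat prompt whose system message carries the retrieved
    context, so the model answers from the supplied data."""
    context = "\n\n".join(retrieve(query, documents))
    return [
        {"role": "system",
         "content": "Answer using ONLY the context below.\n\n" + context},
        {"role": "user", "content": query},
    ]

docs = [
    "Returns are accepted within 30 days with a receipt.",
    "Shipping is free on orders over $50.",
    "Support hours are 9am to 5pm on weekdays.",
]
messages = build_grounded_prompt("What are the support hours?", docs)
# `messages` would then be passed to a GPT chat deployment, e.g. via
#   client.chat.completions.create(model="<your-deployment>", messages=messages)
```

In the real lab, the retrieval step is handled by Azure Cognitive Search over your indexed Blob Storage data (with the text-embedding-ada-002 model providing vector embeddings); the grounding idea is the same.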

Frequently Asked Questions

Q1) What are the key skills I need to develop to pass the Azure AI-102 certification exam?

Ans: You need to focus on deploying AI services, securing and monitoring AI resources, developing custom models, and integrating Azure AI solutions into applications.

Q2) How important are hands-on labs for the Azure AI-102 exam preparation?

Ans: Hands-on labs are crucial as they provide practical experience in implementing AI solutions, which is essential for understanding exam concepts and performing well.

Q3) Can I complete the labs in a different order, or should I follow the sequence provided?

Ans: While it’s possible to complete the labs in a different order, following the recommended sequence helps build foundational knowledge before tackling more complex tasks.

Q4) What if I don’t have access to Azure resources for the labs?

Ans: You can use a free Azure account or explore lab environments provided by training platforms to gain access to the necessary resources for completing the labs.

Q5) How does the AI-102 certification compare to other Azure certifications in terms of difficulty?

Ans: The AI-102 certification is specialized and requires a good understanding of AI and machine learning concepts, making it more challenging than foundational certifications but comparable to other specialized certifications like Azure Data Scientist or Azure Developer.

Next Task For You

Unlock the power of AI & ML on Azure in our Free Masterclass!

Learn from experts, get hands-on with Azure’s tools, and explore certification strategies to land high-paying roles. Plus, receive a personalized roadmap to your AI/ML career and a special gift if you stay until the end!

Book Your Free Seat Now by clicking the image below:


mike

I started my IT career in 2000 as an Oracle DBA/Apps DBA. The first few years were tough (<$100/month), with very little growth. In 2004, I moved to the UK. After working really hard, I landed a job that paid me £2700 per month. In February 2005, I saw a job that paid £450 per day, nearly 4 times my salary at the time.