AWS ML Specialty: Step-by-Step Hands-On Guide 2025 by K21 Academy


This blog post is your ultimate guide to mastering the AWS Certified Machine Learning – Specialty Certification and unlocking career opportunities as a skilled ML Engineer. Dive into our comprehensive Step-By-Step Activity Guides, meticulously crafted to equip you with the practical expertise needed to design, implement, and deploy cutting-edge ML solutions using AWS. Whether you’re looking to enhance your resume, ace job interviews, or confidently pass the certification exam, this hands-on walkthrough will prepare you for success in the ever-evolving world of AI and machine learning. Your journey to achieving this prestigious certification and advancing your career starts here!

List of labs that we include in our AWS Certified Machine Learning Specialty Program

1.1 AWS Basic Labs

  • Lab 1: Create an AWS Free Trial Account
  • Lab 2: CloudWatch – Create Billing Alarm & Service Limits

1.2 Data Engineering in AWS

  • Lab 1: Analyzing CSV Data in S3 with Amazon Athena
  • Lab 2: Lifecycle Management on S3 Bucket
  • Lab 3: Transfer data to S3 using Amazon Kinesis Firehose
  • Lab 4: Creating Managed Service Apache Flink For SQL Applications
  • Lab 5: Creating and Managing AWS Glue Crawlers for Data Cataloging
  • Lab 6: Creating and Running AWS Glue ETL Jobs for Data Transformation

1.3: Data Analysis & Transformation

  • Lab 1: Create And Query An Index With Amazon Kendra
  • Lab 2: Discover sensitive data present in an S3 bucket using Amazon Macie
  • Lab 3: Prepare, Analyze Training Data for ML with SageMaker Data Wrangler & Clarify
  • Lab 4: Preparing Data for TF-IDF with SageMaker JupyterLab

1.4 Machine Learning with SageMaker

  • Lab 1: Build, Train, and Deploy a Machine Learning Model with Amazon SageMaker

1.5 High-Level ML Services

  • Lab 1: Perform Sentiment Analysis with Amazon Aurora PostgreSQL DB and AWS Comprehend ML integration
  • Lab 2: Analyze Insights in text with Amazon Comprehend
  • Lab 3: Using Amazon SageMaker JumpStart to load and use GPT from HuggingFace
  • Lab 4: Build a sample chatbot using Amazon Lex
  • Lab 5: Amazon Lex Chatbot Using 3rd Party API
  • Lab 6: Enhance Privacy in Your Transcriptions with Amazon Transcribe
  • Lab 7: Enhancing Transcription Accuracy with Amazon Transcribe Custom Vocabulary

1.6: Machine Learning Implementation & Operations

  • Lab 1: Tuning, Deploying, and Predicting with TensorFlow on SageMaker
  • Lab 2: Training and Deploying an ML Model with a low-code solution

1.7: Gen AI

  • Lab 1: Build a Bedrock Agent with Action Groups, Knowledge Bases, and Guardrails
  • Lab 2: Building and Querying a RAG System with Amazon Bedrock Knowledge Bases
  • Lab 3: Image Style Mixing Using the Foundation Model of Amazon Bedrock

1.8: Deep Learning & Hyperparameter Tuning

  • Lab 1: Train a Deep Learning Model With AWS DL Containers
  • Lab 2: Optimize hyperparameters with Amazon SageMaker Automatic Model Tuning

1.1 AWS Basic Labs

Lab 1: Create an AWS Free Trial Account

Embark on your AWS journey by setting up a free trial account. This hands-on lab guides you through the initial steps of creating an AWS account, giving you access to a plethora of cloud services to experiment and build with.

Amazon Web Services (AWS) offers new subscribers a free trial account for 12 months to get hands-on experience with its services. The Free Tier covers a wide range of services with limited usage quotas, so you can practice, gain knowledge of AWS Cloud services, and build real solutions without being charged. Here, we will look at how to register for an AWS Free Tier account.

To learn how to create a free AWS account, check our Step-by-step blog, How To Create AWS Free Tier Account

Lab 2: CloudWatch – Create Billing Alarm & Service Limits

Dive into CloudWatch, AWS’s monitoring service. This lab focuses on setting up billing alarms to manage costs effectively and keeping an eye on service limits to ensure your applications run smoothly within defined boundaries.

AWS billing notifications can be enabled using Amazon CloudWatch, the AWS monitoring service that tracks activity across your account. In addition to billing notifications, CloudWatch monitors applications, collects logs, metrics, and other service metadata, and detects changes in your AWS account usage.

AWS CloudWatch offers a number of metrics on which you can set alarms. For example, you may set an alarm to warn you when a running instance's CPU or memory utilization exceeds 90%, or when the estimated bill exceeds $100. An AWS Free Tier account includes 10 alarms and 1,000 email notifications each month.
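
For reference, here is a minimal sketch of creating the same billing alarm programmatically with boto3 (the alarm name, threshold, and SNS topic ARN are placeholders; the lab walks through the console path, and "Receive Billing Alerts" must be enabled in the account settings first):

```python
import boto3

# Billing metrics are only published in us-east-1, regardless of workload region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-billing-alarm",                     # placeholder name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                                          # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                                       # alert once charges pass $100
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
)
```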


To learn about CloudWatch, check our Step-by-step blog, CloudWatch vs. CloudTrail: Comparison, Working & Benefits 

1.2 Data Engineering in AWS

Lab 1: Analyzing CSV Data in S3 with Amazon Athena

Objective: To effectively analyze CSV data stored in Amazon S3 using Amazon Athena, while leveraging its powerful SQL query capabilities to gain meaningful insights, uncover data patterns, and seamlessly streamline the overall process of data analysis.

This Lab methodically guides you through creating a table in Amazon Athena for querying sample CSV data stored in Amazon S3. To begin with, you’ll set up S3 buckets to act as the source and destination for your data, followed by configuring Amazon Athena for querying. Subsequently, you’ll leverage SQL queries to extract meaningful insights. In essence, this process provides a serverless, cost-effective, and efficient solution for data analysis, eliminating the need for complex ETL processes.

By the end of this lab, you will have successfully set up Amazon Athena, created a table for structured querying, and gained hands-on experience in analyzing large datasets using SQL, making data analysis accessible and efficient.
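
To give a feel for the flow, here is a rough boto3 sketch of running one such query (database, table, and bucket names are placeholders for what the lab creates):

```python
import time
import boto3

athena = boto3.client("athena")

# Kick off a SQL query against the CSV-backed table.
query = athena.start_query_execution(
    QueryString="SELECT col1, COUNT(*) AS cnt FROM sample_csv GROUP BY col1 LIMIT 10;",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])
```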

To learn about AWS Athena, check our blog, Amazon Athena: Exploring Cloud Data Insights

Lab 2: Lifecycle Management on S3 Bucket

Objective: To implement Amazon S3 Lifecycle Management to optimize storage costs, automate data transitions, and ensure compliance with retention policies.

This lab guides you through configuring and managing lifecycle management for an Amazon S3 bucket in your AWS account. You'll learn, step by step, to create lifecycle rules, transition objects between storage classes, and automate data expiration, while ensuring the proper permissions and configurations are in place.

By the end of this lab, you’ll have successfully implemented S3 Lifecycle Management, optimizing storage costs and preparing your environment for efficient data management and compliance.
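
As a minimal sketch, a lifecycle rule like the one built in the lab can also be applied with boto3 (bucket name, prefix, and day counts are placeholder assumptions):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-lifecycle-demo-bucket",            # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},    # apply only to this prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                    {"Days": 90, "StorageClass": "GLACIER"},      # archive after 90 days
                ],
                "Expiration": {"Days": 365},      # delete after one year
            }
        ]
    },
)
```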

To learn about S3 bucket, check our blog, AWS S3 Bucket | Amazon Simple Storage Service Bucket

Lab 3: Transfer data to S3 using Amazon Kinesis Firehose

Objective: To effectively configure and seamlessly manage Amazon Kinesis Firehose for streaming real-time data from diverse sources to an Amazon S3 bucket, thereby enabling near-real-time analysis and ensuring optimized data storage for enhanced analytics and reporting.

This Lab methodically guides you through setting up a comprehensive environment to use Amazon Kinesis Data Firehose for real-time data transfer. To begin with, you’ll create an S3 bucket for data storage, followed by configuring a Kinesis Firehose delivery stream. Next, you’ll establish CloudWatch Logs for monitoring and deploy a Virtual Private Cloud (VPC) to ensure secure data flow. Furthermore, you’ll set up an EC2 instance to generate traffic, while continuously monitoring logs in CloudWatch. Finally, you’ll configure subscription filters to seamlessly stream log data to the S3 bucket, ensuring efficient and secure data transfer throughout the process.

By the end of this lab, you will have implemented a reliable, scalable, and efficient real-time data streaming pipeline using Amazon Kinesis Firehose, gaining hands-on experience in handling and analyzing large-scale streaming data.
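
For context, pushing a record into a delivery stream from code is a one-liner with boto3; this sketch assumes a hypothetical stream name and payload:

```python
import json
import boto3

firehose = boto3.client("firehose")

record = {"event": "page_view", "user_id": 42}        # sample payload

# Firehose buffers records and delivers them to the configured S3 bucket.
firehose.put_record(
    DeliveryStreamName="my-firehose-to-s3",           # placeholder stream from the lab
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```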

To learn about Amazon Kinesis Firehose, check our blog, What is AWS Kinesis (Amazon Kinesis Data Streams)?

Lab 4: Creating Managed Service Apache Flink For SQL Applications

Objective: To create and deploy a real-time stream processing application using Amazon Managed Service for Apache Flink, leveraging Flink SQL for querying, aggregating, and analyzing live data streams efficiently without managing the underlying infrastructure.

This lab comprehensively demonstrates how to utilize Amazon Managed Service for Apache Flink to efficiently process real-time streaming data. Step-by-step, you will set up an Apache Flink Studio Notebook, seamlessly ingest live data from Amazon Kinesis, perform advanced SQL-based queries and analytics, and ultimately output results for further in-depth analysis. Moreover, this lab equips you with the ability to build scalable and low-latency applications for real-time analytics, thereby enabling automated and data-driven decision-making processes.

By the end of this lab, you will have gained practical hands-on experience in effectively setting up and comprehensively managing Apache Flink applications, seamlessly integrating data from Amazon Kinesis, and efficiently processing live streams using Flink SQL, all while working within a fully managed and optimized AWS environment.
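
To give a flavor of the Studio notebook workflow, here is the kind of Flink SQL you might run there, shown as a Python string for reference (the stream, table, and column names are illustrative assumptions, not the lab's exact schema):

```python
# SQL of this shape would be pasted into a %flink.ssql paragraph of the Studio notebook.
FLINK_SQL = """
CREATE TABLE ticker (
    symbol VARCHAR(8),
    price DOUBLE,
    event_time TIMESTAMP(3),
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kinesis',
    'stream' = 'input-stream',        -- placeholder Kinesis stream
    'aws.region' = 'us-east-1',
    'format' = 'json'
);

-- One-minute tumbling-window average price per symbol.
SELECT symbol,
       TUMBLE_END(event_time, INTERVAL '1' MINUTE) AS window_end,
       AVG(price) AS avg_price
FROM ticker
GROUP BY symbol, TUMBLE(event_time, INTERVAL '1' MINUTE);
"""
```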

Lab 5: Creating and Managing AWS Glue Crawlers for Data Cataloging

This Lab systematically guides you through setting up and managing AWS Glue Crawlers to organize and catalog data stored in S3 buckets. To start with, you will create an S3 bucket for storing raw data, then proceed to configure an IAM role to securely grant AWS Glue permissions. Following this, you will create a crawler to scan and generate metadata tables in the AWS Glue Data Catalog. Ultimately, these tables provide a structured and accessible view of your data, facilitating seamless integration for ETL (Extract, Transform, Load) processes.

By the end of this lab, you will have successfully configured an AWS Glue Crawler, enabling automated data discovery, metadata management, and efficient data handling for analytics and compliance.
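
A compact boto3 sketch of the same setup (crawler name, role ARN, database, and S3 path are placeholders for the resources the lab creates):

```python
import boto3

glue = boto3.client("glue")

# Point a crawler at the raw-data prefix so it can infer schemas into the Data Catalog.
glue.create_crawler(
    Name="raw-data-crawler",                                      # placeholder name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",        # IAM role from the lab
    DatabaseName="raw_data_catalog",
    Targets={"S3Targets": [{"Path": "s3://my-raw-data-bucket/input/"}]},
)
glue.start_crawler(Name="raw-data-crawler")
```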

To learn about AWS Glue, check our blog, AWS Glue: Overview, Features, Architecture, Use Cases & Pricing

Lab 6: Creating and Running AWS Glue ETL Jobs for Data Transformation

Objective: To create and run AWS Glue ETL jobs to automate the Extract, Transform, and Load (ETL) process, enabling efficient data transformation and seamless storage for analytics and reporting.

This lab guides you through automating ETL processes using AWS Glue. You'll learn, step by step, to create a destination S3 bucket for transformed data, set up an ETL job in AWS Glue, transform data with schema adjustments and type conversions, and load the transformed data into the destination bucket. This hands-on approach simplifies data preparation for analytics and reporting while enhancing efficiency and scalability.

By the end of this lab, you’ll have a clear understanding of how to implement and monitor ETL workflows using AWS Glue, empowering you to handle large datasets with minimal manual intervention and prepare data effectively for business insights.
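
For orientation, a Glue ETL job script follows a standard skeleton like the sketch below (database, table, mappings, and output path are assumptions; the lab's visual editor generates an equivalent script):

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from awsglue.transforms import ApplyMapping
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the table the crawler cataloged (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_data_catalog", table_name="input"
)

# Simple schema adjustment: keep two columns and cast one to double.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("id", "string", "id", "string"),
              ("amount", "string", "amount", "double")],
)

# Write the transformed data to the destination bucket as Parquet.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-transformed-bucket/output/"},
    format="parquet",
)
job.commit()
```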

To learn about AWS Glue, check our blog, AWS Glue: Overview, Features, Architecture, Use Cases & Pricing

1.3: Data Analysis & Transformation

Lab 1: Create And Query An Index With Amazon Kendra

Objective: To create and manage an Amazon Kendra index that enables natural language search across multiple enterprise data sources. This includes setting up data ingestion from Amazon S3 using connectors, configuring FAQs for quick responses, querying the index to retrieve relevant information, and cleaning up resources after use. The lab demonstrates how to leverage Amazon Kendra’s machine learning capabilities to provide unified, accurate search results across diverse data repositories.
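
Once the index is populated, querying it from code is a single call; a minimal boto3 sketch (the index ID and question are placeholders):

```python
import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",   # your index ID from the console
    QueryText="What is our parental leave policy?",   # natural language question
)

# Print the top few matches with their result type and source document title.
for result in response["ResultItems"][:3]:
    print(result["Type"], "-", result.get("DocumentTitle", {}).get("Text"))
```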


To learn about Amazon Kendra and related services, check our blog: AWS AI, ML, and Generative AI Services and Tools

Lab 2: Discover sensitive data present in an S3 bucket using Amazon Macie

Objective: To learn how to use Amazon Macie to automatically discover and protect sensitive data stored in Amazon S3. This includes creating and managing S3 buckets, enabling Macie, configuring and running Macie jobs to scan for sensitive information like personally identifiable data (PII), reviewing findings, and properly cleaning up resources to stay within AWS Free Tier limits. The lab helps enhance data security and compliance by automating sensitive data discovery at scale.
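
A rough boto3 sketch of kicking off the scan the lab configures in the console (job name, account ID, and bucket are placeholders):

```python
import boto3
from botocore.exceptions import ClientError

macie = boto3.client("macie2")

try:
    macie.enable_macie()      # one-time per account/region
except ClientError:
    pass                      # already enabled

# One-time classification job that scans a bucket for PII and other sensitive data.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="pii-scan-demo",     # placeholder job name
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012", "buckets": ["my-sensitive-data-bucket"]}
        ]
    },
)
```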


To learn about Amazon Macie, check our blog: AWS Macie

Lab 3: Prepare, Analyze Training Data for ML with SageMaker Data Wrangler & Clarify

Objective: To learn how to prepare, clean, transform, and analyze training data for machine learning using Amazon SageMaker Data Wrangler and Clarify. This includes importing raw data, performing data transformations, encoding categorical variables, scaling numeric features, checking for bias, and integrating the prepared data with SageMaker Autopilot for automated model training and deployment. The lab equips you with skills to efficiently handle data preparation with minimal coding for building high-quality ML models.

By the end of this lab, you will have successfully:

  • Set up Amazon SageMaker Studio and Data Wrangler environment

  • Imported raw dataset from Amazon S3 into SageMaker Data Wrangler

  • Explored and analyzed data quality and insights using built-in reports

  • Applied data transformations such as removing duplicates, handling missing values, encoding categorical features, and scaling numeric columns

  • Checked for data bias using SageMaker Clarify and generated bias reports (see the sketch after this list)

  • Exported the prepared dataset to Amazon S3 for downstream use

  • Integrated the prepared data with SageMaker Autopilot to train, evaluate, and deploy a machine learning model

  • Managed and deleted AWS resources to avoid unnecessary costs
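
For the bias-check step, here is a rough sketch of what the equivalent SageMaker Python SDK call looks like (role ARN, bucket paths, and column names are assumptions; the lab itself drives this from the Data Wrangler and Clarify UI):

```python
from sagemaker import Session, clarify

session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the prepared dataset lives and where the bias report should go.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/prepared/train.csv",
    s3_output_path="s3://my-bucket/clarify-report/",
    label="y",
    headers=["age", "job", "marital", "y"],   # assumed column names
    dataset_type="text/csv",
)

# Check whether outcomes are skewed with respect to a facet (sensitive attribute).
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="age",                         # assumed facet column
)

processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```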

To learn about Amazon SageMaker, check our blog: Amazon SageMaker AI for Machine Learning

Lab 4: Preparing Data for TF-IDF with SageMaker JupyterLab

Objective: To learn how to implement the TF-IDF (Term Frequency-Inverse Document Frequency) algorithm for text analysis using Amazon SageMaker’s Jupyter Notebook environment. This includes setting up a SageMaker notebook instance, preprocessing text data, calculating TF-IDF scores to evaluate word importance across documents, and managing resources effectively. The lab enables you to extract meaningful textual features to support tasks like keyword extraction, topic categorization, and content analysis.
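
As a self-contained illustration of the idea (not the lab's exact notebook), scikit-learn's vectorizer computes TF-IDF scores in a few lines; words frequent in one document but rare across the corpus score highest:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "dogs and cats make good pets",
]

# Fit TF-IDF over the small corpus, dropping common English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# Inspect the scores for the first document.
for term, score in zip(vectorizer.get_feature_names_out(), tfidf.toarray()[0]):
    if score > 0:
        print(f"{term}: {score:.3f}")
```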


1.4: Machine Learning with SageMaker

Lab 1: Build, Train, and Deploy a Machine Learning Model with Amazon SageMaker

Objective: To build, train, and deploy a machine learning model using Amazon SageMaker, leveraging the XGBoost algorithm to predict customer behavior. This involves preparing data, training the model, deploying it for real-time inference, and evaluating its performance for actionable insights.

This lab effectively demonstrates how to use Amazon SageMaker to build, train, and deploy a machine-learning model. You will sequentially set up a SageMaker notebook instance, preprocess the Bank Marketing Dataset, train an XGBoost model, and deploy it to an endpoint for real-time predictions. Furthermore, as you progress, you will evaluate the model’s performance using a confusion matrix, gaining valuable insights into its predictive accuracy.

You will gain hands-on experience in creating SageMaker environments, training and deploying models, and performing inference to evaluate their performance. Ultimately, this lab enables you to manage end-to-end machine learning workflows within a fully managed service, ensuring an efficient and streamlined approach to building and deploying machine learning solutions.
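
A condensed sketch of the train-and-deploy flow with the SageMaker Python SDK (bucket paths are placeholders, and hyperparameters are illustrative, not the lab's exact values):

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # valid inside a SageMaker notebook

# Pull the managed XGBoost container for the current region.
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://my-bucket/xgb-output/",   # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

# Train on CSV data staged in S3, then stand up a real-time endpoint.
train_input = TrainingInput("s3://my-bucket/bank-marketing/train.csv", content_type="text/csv")
estimator.fit({"train": train_input})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

Remember to delete the endpoint afterwards (`predictor.delete_endpoint()`) to avoid ongoing charges.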

To learn about AWS Sagemaker, check our blog, Amazon SageMaker AI For Machine Learning: Overview & Capabilities

1.5: High-Level ML Services

Lab 1: Perform Sentiment Analysis With Amazon Aurora PostgreSQL DB and AWS Comprehend ML Integration

Objective: To integrate Amazon Aurora PostgreSQL with Amazon Comprehend for real-time sentiment analysis, enabling businesses to monitor customer feedback and improve services effectively.

This hands-on lab demonstrates how to use Amazon Aurora Machine Learning integration with AWS Comprehend to perform sentiment analysis on customer reviews stored in an Aurora database. You will create IAM roles for secure access, set up an Aurora PostgreSQL database, and configure a PostgreSQL client to interact with the database. Using SQL queries, you’ll enable seamless sentiment analysis by leveraging Amazon Comprehend’s ML capabilities, empowering organizations to gain actionable insights from their customer feedback efficiently.
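
A minimal sketch of what the query side looks like from Python with psycopg2, assuming the `aws_ml` extension is installed on the cluster, an IAM role grants Comprehend access, and a hypothetical `customer_reviews` table exists:

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="reviews", user="postgres", password="...",
)

with conn.cursor() as cur:
    # aws_comprehend.detect_sentiment is exposed by the Aurora ML (aws_ml) extension;
    # it returns a (sentiment, confidence) pair for each row.
    cur.execute(
        """
        SELECT review_text, s.sentiment, s.confidence
        FROM customer_reviews,
             aws_comprehend.detect_sentiment(review_text, 'en') AS s;
        """
    )
    for review, sentiment, confidence in cur.fetchall():
        print(sentiment, f"{confidence:.2f}", review[:60])
```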

To learn about Amazon Comprehend, check our blog, What is AWS Comprehend: Natural Language Processing in AWS

Lab 2: Analyze Insights in Text With Amazon Comprehend

Objective: To utilize Amazon Comprehend for analyzing customer feedback, performing sentiment analysis, and deriving actionable insights to enhance customer satisfaction and product offerings.

This engaging hands-on lab demonstrates the use of Amazon Comprehend for text analysis, enabling businesses to classify customer sentiments as positive, negative, neutral, or mixed. Through the comprehensive sentiment analysis of multiple customer reviews, you will delve into key insights such as entities, key phrases, language, syntax, and Personally Identifiable Information (PII). This practical and insightful approach empowers businesses to better understand customer opinions and make well-informed decisions to enhance their services and products.
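
The same analyses are available directly through boto3; a short sketch with a sample review (the text is illustrative):

```python
import boto3

comprehend = boto3.client("comprehend")
review = "The checkout flow was confusing, but the delivery was impressively fast!"

# Overall sentiment plus per-class confidence scores.
sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Entities, key phrases, and PII detected in the same text.
entities = comprehend.detect_entities(Text=review, LanguageCode="en")
key_phrases = comprehend.detect_key_phrases(Text=review, LanguageCode="en")
pii = comprehend.detect_pii_entities(Text=review, LanguageCode="en")
```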

To learn about Amazon Comprehend, check our blog, What is AWS Comprehend: Natural Language Processing in AWS

Lab 3: Using Amazon SageMaker JumpStart to load and use GPT from HuggingFace

Objective: To learn how to deploy and use the Falcon 40B large language model from Hugging Face on Amazon SageMaker JumpStart for advanced text generation tasks. This includes setting up SageMaker Studio and JumpStart, deploying the model, configuring inference parameters, and applying different prompting techniques such as Zero-Shot, Few-Shot, and Chain-of-Thought to generate high-quality, task-specific outputs. The lab also covers resource management and cleanup to optimize costs.
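
A rough sketch of the programmatic route via the SageMaker Python SDK (the exact JumpStart model ID is an assumption; verify it in the JumpStart catalog, and note the deployment provisions a large, costly GPU instance):

```python
from sagemaker.jumpstart.model import JumpStartModel

# Model ID is a placeholder; look up the current Falcon 40B Instruct ID in JumpStart.
model = JumpStartModel(model_id="huggingface-llm-falcon-40b-instruct-bf16")
predictor = model.deploy()

# Zero-shot prompt with basic generation parameters.
response = predictor.predict({
    "inputs": "Summarize the benefits of managed ML services in two sentences.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.6},
})
print(response)

predictor.delete_endpoint()   # clean up to stop charges
```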


Lab 4: Build a sample chatbot using Amazon Lex

Objective: To build a custom text-based chatbot using Amazon Lex that can understand and respond to user intents such as greetings and food orders. This lab guides you through creating a bot, defining intents with sample utterances and responses, building and testing the chatbot, and finally cleaning up resources. By the end, you will have hands-on experience designing conversational interfaces that improve customer interactions through automated dialogue handling.
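
Once the bot is built, you can test it from code as well as the console; a minimal boto3 sketch (bot ID is a placeholder, and `TSTALIASID` is the draft test alias):

```python
import boto3

lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="XXXXXXXXXX",        # placeholder, from the Lex console
    botAliasId="TSTALIASID",   # the built-in draft test alias
    localeId="en_US",
    sessionId="demo-session-1",
    text="I'd like to order a pizza",
)
for message in response.get("messages", []):
    print(message["content"])
```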

Lab 5: Amazon Lex Chatbot Using 3rd Party API

Objective: To build a text-based chatbot using Amazon Lex integrated with a third-party REST API via AWS Lambda. This lab guides you through creating a Lambda function to fetch real-time country information, configuring intents and slots in Amazon Lex, linking the Lambda function for fulfillment, building and testing the chatbot, and cleaning up resources. By the end, you will have hands-on experience creating conversational AI that delivers dynamic, real-time responses by combining Lex’s natural language understanding with external data sources.
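
A simplified sketch of such a fulfillment Lambda for Lex V2, calling the public REST Countries API (the intent and slot names are assumptions for illustration):

```python
import json
import urllib.request

def lambda_handler(event, context):
    # Read the "Country" slot from the Lex V2 event (slot name is an assumption).
    slots = event["sessionState"]["intent"]["slots"]
    country = slots["Country"]["value"]["interpretedValue"]

    # Fetch live country data from a public REST API.
    with urllib.request.urlopen(f"https://restcountries.com/v3.1/name/{country}") as resp:
        data = json.loads(resp.read())[0]
    answer = f"{data['name']['common']}: capital {data['capital'][0]}, population {data['population']:,}."

    # Lex V2 fulfillment response: close the intent and return a message.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": event["sessionState"]["intent"]["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }
```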

Lab 6: Enhance Privacy in Your Transcriptions with Amazon Transcribe

Objective: To learn how to enhance privacy in audio transcriptions using Amazon Transcribe by enabling PII (Personally Identifiable Information) redaction. This includes uploading audio files to Amazon S3, creating and configuring transcription jobs with PII redaction enabled, reviewing redacted transcripts to ensure sensitive data is protected, and managing resources efficiently. The lab equips you with practical skills to securely transcribe sensitive audio content while complying with privacy standards.
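
The key setting is `ContentRedaction` on the transcription job; a short boto3 sketch (job name and bucket/file names are placeholders):

```python
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="redacted-call-demo",            # placeholder job name
    Media={"MediaFileUri": "s3://my-audio-bucket/call.mp3"},
    LanguageCode="en-US",
    OutputBucketName="my-transcripts-bucket",
    # Replace detected PII (names, card numbers, etc.) with [PII] tags in the transcript.
    ContentRedaction={"RedactionType": "PII", "RedactionOutput": "redacted"},
)
```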

Lab 7: Enhancing Transcription Accuracy with Amazon Transcribe Custom Vocabulary

Objective: To enhance transcription accuracy in Amazon Transcribe by creating and using a custom vocabulary tailored for specialized terminology such as technical jargon and brand names. This involves uploading an audio file to an Amazon S3 bucket, creating a custom vocabulary with specific terms and pronunciations, running a custom transcription job using that vocabulary, and managing resources effectively. This lab helps ensure more precise transcriptions suited to domain-specific language needs.
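
In code, this is a two-step flow: create the vocabulary, then reference it from the job's `Settings`; a sketch with placeholder names and example phrases:

```python
import boto3

transcribe = boto3.client("transcribe")

# Teach Transcribe domain terms it would otherwise mis-hear.
transcribe.create_vocabulary(
    VocabularyName="k21-tech-terms",      # placeholder vocabulary name
    LanguageCode="en-US",
    Phrases=["SageMaker", "Kinesis", "JupyterLab"],
)

# Run a job that uses the custom vocabulary (wait until the vocabulary is READY).
transcribe.start_transcription_job(
    TranscriptionJobName="custom-vocab-demo",
    Media={"MediaFileUri": "s3://my-audio-bucket/webinar.mp3"},
    LanguageCode="en-US",
    Settings={"VocabularyName": "k21-tech-terms"},
)
```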


1.6: Machine Learning Implementation & Operations

Lab 1: Tuning, Deploying, and Predicting with TensorFlow on SageMaker

Objective: To leverage Amazon SageMaker’s fully managed machine learning environment to build, train, and deploy a deep learning model based on TensorFlow’s Convolutional Neural Network (CNN) architecture. This model will classify product images with high accuracy to improve inventory management and search functionality for Ubisoft’s e-commerce platform. The process includes preparing the dataset, tuning model hyperparameters for optimal performance, scaling training using GPU instances, and deploying the model for real-time predictions. Ultimately, this solution aims to enhance operational efficiency and user experience by providing fast and reliable image classification.
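
For orientation, a script-mode TensorFlow training job on SageMaker looks roughly like this (the entry-point script name, role ARN, data path, and hyperparameters are assumptions):

```python
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_cnn.py",                            # your training script (assumed name)
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role
    instance_count=1,
    instance_type="ml.p3.2xlarge",                         # single-GPU instance for CNN training
    framework_version="2.12",
    py_version="py310",
    hyperparameters={"epochs": 20, "batch-size": 64, "learning-rate": 1e-3},
)

# Train on images staged in S3, then deploy for real-time classification.
estimator.fit({"training": "s3://my-bucket/product-images/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```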

Lab 2: Training and Deploying an ML Model with a low-code solution

Objective: To leverage Amazon SageMaker Canvas, a no-code machine learning platform, to build, train, and deploy a predictive model that estimates taxi fare amounts using historical NYC taxi trip data. This lab focuses on preparing and transforming data, addressing data quality issues, training an accurate regression model, evaluating its performance, and deploying the model for real-time predictions—all without writing code. By the end of this hands-on lab, you will gain practical experience in executing an end-to-end machine learning workflow with SageMaker Canvas, including data wrangling, model building, and deployment, thereby enabling efficient, scalable, and accessible AI-driven solutions for fare prediction.

Training and Deploying ML Model with low code solution

1.7: Gen AI

Lab 1: Build a Bedrock Agent with Action Groups, Knowledge Bases, and Guardrails

Objective: To build and deploy an intelligent Amazon Bedrock agent that integrates domain-specific knowledge bases, real-time data retrieval via AWS Lambda, and guardrails to ensure secure, relevant, and compliant responses. This lab enables hands-on experience in creating AI agents capable of answering self-employment questions and providing live weather updates, combining large language model capabilities with real-time action execution and safety controls.
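
Once the agent is prepared and aliased, invoking it from code is a single streaming call; a short boto3 sketch (agent and alias IDs are placeholders):

```python
import boto3

runtime = boto3.client("bedrock-agent-runtime")

response = runtime.invoke_agent(
    agentId="XXXXXXXXXX",          # placeholder, from the Bedrock console
    agentAliasId="YYYYYYYYYY",     # placeholder alias ID
    sessionId="demo-session-1",
    inputText="What's the weather in Berlin right now?",
)

# The completion arrives as a stream of chunk events; join them into the answer.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```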

Lab 2: Building and Querying a RAG System with Amazon Bedrock Knowledge Bases

Objective: To build an intelligent, scalable knowledge management system by leveraging Amazon Bedrock Knowledge Bases integrated with Retrieval-Augmented Generation (RAG). This system enhances information retrieval by indexing large document collections stored in Amazon S3, converting text into vector embeddings, and enabling efficient, context-aware search and question-answering capabilities. The solution aims to improve organizational productivity and customer support by providing accurate, relevant, and up-to-date information through AI-powered retrieval.
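
Querying the finished knowledge base is a single RetrieveAndGenerate call; a boto3 sketch (the knowledge base ID and model ARN are placeholders; check the console for the models available in your region):

```python
import boto3

runtime = boto3.client("bedrock-agent-runtime")

response = runtime.retrieve_and_generate(
    input={"text": "What does our refund policy say about digital goods?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "XXXXXXXXXX",   # placeholder, from the Bedrock console
            # Placeholder model ARN; pick a model enabled in your account/region.
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```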


1.8: Deep Learning & Hyperparameter Tuning

Lab 1: Train a Deep Learning Model With AWS DL Containers

Objective: To set up a secure and optimized AWS environment using AWS Deep Learning Containers on an EC2 instance, enabling the training of a TensorFlow deep learning model. This involves creating AWS access keys, launching a deep learning instance, configuring secure access via PuTTY, pulling Docker images from Amazon ECR, and running a deep learning training job efficiently while managing resource security and cost.

Lab 2: Optimize hyperparameters with Amazon SageMaker Automatic Model Tuning

Objective: To use Amazon SageMaker Automatic Model Tuning (AMT) to automate hyperparameter optimization and find the best hyperparameter combinations for machine learning models such as XGBoost. Hyperparameters (e.g., learning rate, max depth) control model training behavior and impact accuracy and generalization. Manual tuning is time-consuming and costly because the search space is large; AMT addresses this by running multiple training jobs in parallel to explore the space efficiently.
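
A compact sketch with the SageMaker Python SDK, assuming an XGBoost `estimator` and `train_input`/`val_input` channels like those from the earlier SageMaker lab (ranges and job counts are illustrative):

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

# `estimator` is an XGBoost Estimator configured as in the earlier SageMaker sketch.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",   # built-in XGBoost metric to maximize
    objective_type="Maximize",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),    # learning rate
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=20,            # total training jobs AMT may launch
    max_parallel_jobs=4,    # how many it explores in parallel
)

tuner.fit({"train": train_input, "validation": val_input})
print(tuner.best_training_job())
```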

Frequently Asked Questions

1. What is the AWS Certified Machine Learning – Specialty Certification, and who should pursue it?

Ans: The AWS Certified Machine Learning – Specialty Certification validates expertise in designing, implementing, and maintaining machine learning solutions using AWS services. It is ideal for data scientists, ML engineers, and cloud professionals who want to demonstrate their skills in building and deploying ML models on AWS.

2. What are the prerequisites for taking the AWS Certified Machine Learning – Specialty exam?

Ans: While there are no mandatory prerequisites, it's recommended to have hands-on experience with AWS services, a strong understanding of machine learning concepts, and familiarity with data engineering workflows. Prior experience with tools like Amazon SageMaker, Glue, and Athena can be particularly beneficial.

3. What hands-on labs are included in this program to prepare for the certification?

Ans: The program offers a variety of labs covering essential topics such as:

  • Setting up an AWS Free Tier Account and using CloudWatch for billing alarms
  • Data engineering with AWS Glue, Athena, and Kinesis Firehose
  • Building, training, and deploying models with Amazon SageMaker
  • High-level ML services like Amazon Comprehend for sentiment analysis
  • Advanced topics like transformers and generative AI using Amazon Bedrock and Hugging Face models

4. What is Amazon SageMaker, and why is it important for this certification?

Ans: Amazon SageMaker is a fully managed service that provides tools to build, train, and deploy machine learning models at scale. It is a critical component of the AWS Machine Learning ecosystem and is extensively covered in the certification. Proficiency in SageMaker demonstrates the ability to manage end-to-end ML workflows efficiently.

5. How can this certification boost my career as an ML Engineer?

Ans: Achieving the AWS Certified Machine Learning – Specialty certification enhances your credibility and demonstrates your expertise in using AWS services for machine learning. It helps you stand out to employers, qualify for advanced roles, and expand your career opportunities in the rapidly growing fields of AI and machine learning.


Next Task For You

Don’t miss our EXCLUSIVE Free Training on Generative AI on AWS Cloud! This session is perfect for those pursuing the AWS Certified AI Practitioner certification. Explore AI, ML, DL, & Generative AI in this interactive session.

Click the image below to secure your spot!

