ML Two
Lecture 05
🤗 Object detection with CreateML + Live Capture App 😎
Welcome 👩‍🎤🧑‍🎤👨‍🎤
First of all, don't forget to confirm your attendance on Seats App!
as usual, a cool AI-related project to wake us up
and another cool AI-related project to wake us up one more time
our previous classification app:
it works on a single static image
the final product of today's app
it works on live capture "video" in real time!!!
after today's lecture:
-- object detection: how to prepare a dataset, how to train 🤖
-- a live capture app that recognises sea creatures 🦈
What's the difference between a static image, a static video, and a live capture video?
on digital devices, video is nothing more than a sequence of images (frames)
=>
most video processing eventually boils down to good old image processing
=>
all the image-based AIs in our toolbox are ready to go for video processing
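if you're curious, here's a minimal sketch of that idea in Swift: pull frames out of a pre-recorded video with AVAssetImageGenerator and feed each one to the same per-image function you'd use on a single photo (detectObjects(in:) is just a hypothetical stand-in, not code from our app)

```swift
import AVFoundation
import CoreGraphics

// Hypothetical stand-in for any single-image model call (classifier, detector, ...)
func detectObjects(in frame: CGImage) {
    // run your image-based AI here
}

// Pull frames out of a pre-recorded video and treat each one as a plain image.
func processVideo(at url: URL, framesPerSecond: Double = 5) throws {
    let asset = AVAsset(url: url)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true

    let duration = CMTimeGetSeconds(asset.duration)
    var t = 0.0
    while t < duration {
        let time = CMTime(seconds: t, preferredTimescale: 600)
        let frame = try generator.copyCGImage(at: time, actualTime: nil)
        detectObjects(in: frame)          // good old image processing, once per frame
        t += 1.0 / framesPerSecond
    }
}
```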
One key difference between static video and live capture video is that static video is pre-recorded 📽️🎞️, while live capture video is captured in real time 📸🤳
pre-recorded video: a fixed set of frames
vs.
live capture video: a dynamic set of frames that keeps flowing in
real-time processing is an interesting topic (think of digital musical instruments)
where processing speed imposes a bottleneck on how "real time" the output is
(e.g. if each frame takes 40 ms to process, you can only sustain around 25 fps)
In Apple's frameworks, AVFoundation does live capture for us
and we are counting on Core ML and optimised chips for fast AI computation that gives a smooth real-time experience rather than lagging
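very roughly, the AVFoundation side looks something like this (a sketch, not the app's exact code): the camera pushes frames to a delegate, and each frame gets handed to the model as soon as it arrives

```swift
import AVFoundation

// A rough sketch of live capture with AVFoundation: the camera pushes frames
// to a delegate callback, and each frame is handed to the model as it arrives.
final class LiveCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let frameQueue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true        // drop frames rather than fall behind
        output.setSampleBufferDelegate(self, queue: frameQueue)
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per captured frame: this is where the Core ML / Vision request would run.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer   // hand this to the model; slow inference here means laggy "real time"
    }
}
```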
Let's play with the app first
- Preferably run it on your phone, not the simulator
- Don't forget to change the "Team" to your account under the "Signing & Capabilities" tab
next: train our own object detection model using CreateML (no Python this time) and integrate it into the app
the recurring machine learning workflow throughout our unit
a gentle reminder of object detection
part 1: data collection
CreateML has its own data format for training, check it out here
10 mins read, take a note of keywords and alien words!
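for a rough idea of what that format looks like: each image gets one entry with a list of labelled bounding boxes, where x/y are the centre of the box in pixels; here's a small sketch decoding it in Swift (the file name and label below are made up for illustration)

```swift
import Foundation

// A sketch of the CreateML object detection annotation format: one entry per
// image, each with labelled bounding boxes whose x/y give the centre of the
// box in pixels. (File name and label are made up for illustration.)
let sampleJSON = """
[
  { "image": "fish_001.jpg",
    "annotations": [
      { "label": "shark",
        "coordinates": { "x": 160, "y": 120, "width": 80, "height": 60 } }
    ] }
]
"""

struct ImageAnnotation: Codable {
    let image: String
    let annotations: [BoxAnnotation]
}
struct BoxAnnotation: Codable {
    let label: String
    let coordinates: Box
}
struct Box: Codable {
    let x, y, width, height: Double
}

if let entries = try? JSONDecoder().decode([ImageAnnotation].self,
                                           from: Data(sampleJSON.utf8)) {
    print(entries[0].annotations[0].label)   // "shark"
}
```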
to save us some headaches from finding data, annotating and formatting
introducing... Roboflow
These datasets are already annotated and split, and you can choose to download them in a range of formats, including the CreateML format 🥰
15 mins browsing this dataset library, find one dataset that you like
take a note of:
-- how many images are there?
-- what are the classes available?
-- have you seen the annotation window?
-- do all the images have the same dimensions?
also ask ChatGPT: why is the dataset split into training/validation/testing sets when training an AI?
validation, evaluation and testing are very similar notions
they all seek to answer this question "how does my model perform on *unseen* data?"
The keyword for model performance is *generalizability*, which can only be evaluated on unseen data.
- using a set for training, or even just for hyperparameter searching, will "pollute" its unseenness (see the little sketch after this list)
-- ☝️ we need a validation set for monitoring performance during training
-- ✌️ we need a testing set for selecting the best model after training
-- 👌 we need an evaluation set for evaluating the final best model
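just to make the idea concrete, here's a tiny Swift sketch of such a split (Roboflow has already done this for you, so this is only an illustration)

```swift
import Foundation

// A tiny sketch of the idea: shuffle once, carve the data into three piles,
// and never let the non-training piles leak into training or hyperparameter
// tuning, so they stay "unseen".
func split(_ files: [String], train: Double = 0.7, validation: Double = 0.2)
    -> (train: [String], validation: [String], test: [String]) {
    let shuffled = files.shuffled()
    let trainEnd = Int(Double(shuffled.count) * train)
    let valEnd = trainEnd + Int(Double(shuffled.count) * validation)
    return (Array(shuffled[..<trainEnd]),
            Array(shuffled[trainEnd..<valEnd]),
            Array(shuffled[valEnd...]))          // whatever is left becomes the test set
}

let sets = split((1...10).map { "img_\($0).jpg" })
print(sets.train.count, sets.validation.count, sets.test.count)   // 7 2 1
```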
let's try the aquarium dataset!
after clicking the "Download" button
don't forget to select the life-saving "CreateML" format
CreateML time!
your turn:
--1. add the train/val/test data folders into the data sources
--2. select transfer learning
--3. enter a smaller number of iterations (e.g. 1k)
this is just for lecture demonstration purposes; in practice you can try a larger number of iterations
--4. fire off the training!!!
🎉
๐Ÿ‘๏ธ Two art projects with object(Face) detection models in live capture:
- Female figure by Jordan Wolfson
- Hello by XuZhen
what does IoU (intersection over union) mean?
it is a metric for measuring the similarity of two bounding boxes (the ground-truth box and the box predicted by the newbie AI)
15 mins read here
-- what does IoU 50% mean?
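in code, the metric boils down to a few lines; here's a quick sketch with CGRect: area of overlap divided by area of union

```swift
import CoreGraphics

// IoU = area of overlap / area of union of the two boxes.
// 1.0 means the predicted box matches the ground-truth box exactly,
// 0.0 means they don't overlap at all.
func intersectionOverUnion(_ a: CGRect, _ b: CGRect) -> CGFloat {
    let overlap = a.intersection(b)
    guard !overlap.isNull else { return 0 }
    let overlapArea = overlap.width * overlap.height
    let unionArea = a.width * a.height + b.width * b.height - overlapArea
    return unionArea > 0 ? overlapArea / unionArea : 0
}

let groundTruth = CGRect(x: 0, y: 0, width: 100, height: 100)
let predicted   = CGRect(x: 50, y: 0, width: 100, height: 100)
print(intersectionOverUnion(groundTruth, predicted))   // about 0.33, below an IoU 50% threshold
```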
preview the result and export the model
import the model into our live capture app and change this line:
change the model name on line 24 to your model class name
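for reference, the line you're editing does something roughly like this (a sketch, not the app's exact code); SeaCreatureDetector below is a placeholder: use the class name Xcode generated for the .mlmodel you just exported

```swift
import Vision
import CoreML

// `SeaCreatureDetector` is a placeholder: swap in the Swift class name that
// Xcode generated for the .mlmodel you dragged into the project.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try SeaCreatureDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    return VNCoreMLRequest(model: visionModel) { request, _ in
        // each detection is a VNRecognizedObjectObservation: class labels + a bounding box
        for case let detection as VNRecognizedObjectObservation in request.results ?? [] {
            print(detection.labels.first?.identifier ?? "?", detection.boundingBox)
        }
    }
}
```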
play with the app!
question 1: from the entire pipeline
-- data collection -- training -- integration
where do we specify/input the target classes information?
this is equivalent to asking: if I don't want a live sea creature detector but a live car model detector instead, what are the steps to take?
question 2: can you think of possible extensions/modifications of this app to do some cool AR stuff?
🔥
✊
👟⚽️
20-30 mins lil exercise:
-- select another dataset on Roboflow and download
-- train an object detector using CreateML (enter a small iteration number)
-- update your app with the new model
🎉
today we talked about:
-- static vs. live capture content
-- real-time processing, computation speed as bottleneck
-- object detection datasets on Roboflow
-- object detection training using CreateML
-- object detection evaluation metric
-- integrating the new model into the live capture app
a recent AI paper & project: a SimCity of AIs!
We'll see you next week, same time same place! 🫡