PriorityEye Pitch - UCL First Response in a Box Design Hackathon 2023

Video: https://www.youtube.com/watch?v=4T_TzGmXpB4

Link to website - https://happy-field-0b37b4f03.3.azurestaticapps.net
Link to video - https://liveuclac-my.sharepoint.com/:v:/g/personal/ucabtwc_ucl_ac_uk/EcOqsfED1nBAnBRrB8Voa78BsU-GT8r6CBNt1lok6mrAXw?nav=eyJyZWZlcnJhbEluZm8iOnsicmVmZXJyYWxBcHAiOiJPbmVEcml2ZUZvckJ1c2luZXNzIiwicmVmZXJyYWxBcHBQbGF0Zm9ybSI6IldlYiIsInJlZmVycmFsTW9kZSI6InZpZXciLCJyZWZlcnJhbFZpZXciOiJNeUZpbGVzTGlua0RpcmVjdCJ9fQ&e=iAD6z2

Assumptions about the UAV
- Has a camera to capture video images
- Has an onboard GPU, e.g. an NVIDIA Jetson
- Has intermittent / low-bandwidth connectivity with the command centre (see the payload sketch below)

We have not touched on the issue of AI for navigation / search.
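Given the low-bandwidth assumption above, a natural architecture is to run the models below onboard and transmit only compact per-frame detection summaries to the command centre rather than raw video. A minimal sketch; the field names, message format, and send-side function are our illustrative assumptions, not part of the pitch code:

```python
import json

def detection_message(frame_id, lat, lon, detections):
    """Pack onboard detection results into a small JSON payload.

    Sending a few hundred bytes per frame instead of raw video keeps an
    intermittent command-centre link usable. All field names here are
    illustrative assumptions.
    """
    return json.dumps({
        "frame": frame_id,
        "gps": [round(lat, 6), round(lon, 6)],
        "people": [
            {"bbox": [int(v) for v in d["bbox"]],  # x, y, w, h in pixels
             "conf": round(float(d["conf"]), 2)}
            for d in detections
        ],
    }, separators=(",", ":"))  # compact encoding, no whitespace

# Example: one person detected at 83% confidence.
print(detection_message(17, 51.524778, -0.133583,
                        [{"bbox": [104, 220, 38, 90], "conf": 0.83}]))
```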

Models and Data

1) DETR: End-to-End Object Detection with Transformers / YOLO
https://github.com/facebookresearch/detr
https://ai.meta.com/research/publications/end-to-end-object-detection-with-transformers/
https://github.com/ultralytics/ultralytics

Offroad Human Detection Dataset of people in orchards/farms - https://www.nrec.ri.cmu.edu/solutions/agriculture/other-agriculture-projects/human-detection-and-tracking.html
TinyPerson for detecting humans from a distance in offroad scenario - https://github.com/ucas-vg/PointTinyBenchmark/tree/master/dataset
Human Detection from Drone Images dataset - https://arxiv.org/pdf/1804.07437.pdf
MPII Human Pose Dataset for humans in various positions / doing different activities - http://human-pose.mpi-inf.mpg.de/
PTB-TIR Thermal Infrared Pedestrian Tracking - https://github.com/QiaoLiuHit/PTB-TIR_Evaluation_toolkit
Thermal Imaging from UAV - https://zenodo.org/record/4327118

DETR/YOLO can be trained to detect a small set of object classes, and there are examples where these models have been trained to detect humans at various distances and against varied backgrounds (https://towardsdatascience.com/easy-object-detection-with-facebooks-detr-d0bd9e4e53a4). This matters because the first step in search and rescue is to identify humans from a distance in varied environments. If the UAV also carries an infrared camera, it may also be possible to train a detector on thermal images of humans using the PTB-TIR dataset and the UAV Thermal Imaging dataset.
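A minimal sketch of the detection step using the Ultralytics package linked above. The stock COCO-pretrained weights already include a "person" class (index 0), though detecting the small, distant figures in TinyPerson-style aerial imagery would realistically require fine-tuning on those datasets; the file names below are placeholders:

```python
# Person detection with the Ultralytics YOLO API (pip install ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small model suited to a Jetson-class GPU

# Run on a single frame, keeping only the COCO "person" class (index 0).
results = model.predict("frame.jpg", classes=[0], conf=0.25)

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"person at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), "
          f"confidence {float(box.conf):.2f}")
```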

2) U-NET
https://arxiv.org/abs/1505.04597
https://github.com/milesial/Pytorch-UNet

Small Kaggle Flood Dataset - https://www.kaggle.com/datasets/faizalkarim/flood-area-segmentation
Large Kaggle Flood Dataset - https://www.kaggle.com/datasets/hhrclemson/flooding-image-dataset
Kaggle Flood Dataset with roads - https://www.kaggle.com/datasets/saurabhshahane/roadway-flooding-image-dataset

U-NET is useful for segmenting images into different classes, and there are examples on Kaggle and in other publications where image segmentation is used to detect areas of water / flooding. This would help identify whether an individual is trapped by surrounding water, and could also be used to plan routes and equipment for the rescue team.
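As a sketch of the downstream use: given a binary water mask from the segmentation model and a person's bounding box from the detector, we can estimate how surrounded by water that person is. The ring heuristic below is our own illustrative assumption, not part of U-NET itself:

```python
import numpy as np

def surrounded_by_water(mask, bbox, margin=20):
    """Fraction of a ring around a person's bbox that is water.

    mask : HxW binary array from the segmentation model (1 = water).
    bbox : (x1, y1, x2, y2) person box in pixel coordinates.
    The ring heuristic is an illustrative assumption.
    """
    h, w = mask.shape
    x1, y1, x2, y2 = bbox
    # Expand the box by `margin` pixels, clipped to the image.
    X1, Y1 = max(x1 - margin, 0), max(y1 - margin, 0)
    X2, Y2 = min(x2 + margin, w), min(y2 + margin, h)
    ring = mask[Y1:Y2, X1:X2].astype(bool).copy()
    # Blank out the person's own box, leaving only the surrounding ring.
    ring[(y1 - Y1):(y2 - Y1), (x1 - X1):(x2 - X1)] = False
    ring_area = (Y2 - Y1) * (X2 - X1) - (y2 - y1) * (x2 - x1)
    return ring.sum() / max(ring_area, 1)

mask = np.zeros((240, 320), dtype=np.uint8)
mask[:, :80] = 1  # left side of the frame is water
print(surrounded_by_water(mask, (60, 100, 100, 180)))  # ~0.5 -> partly cut off
```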

3) BLIP
https://github.com/salesforce/BLIP

Paper describing an image caption model/dataset for dangerous situations - https://arxiv.org/pdf/1711.02578.pdf
Another paper describing an image caption model/dataset for dangerous situations, which may explain how the dataset was curated, but we were unable to access it - https://ieeexplore.ieee.org/document/9753788/

Victims in a flood or other disaster setting may have injuries and may be under continuing threat from their environment. Using image captioning, we may be able to generate a textual representation of each scene that can be used for downstream classification of how severe or dangerous the situation is for that person, and hence how much time there is to rescue them. There are no readily available datasets for this specific task, since the relevant imagery comes from extreme settings, which makes this challenging. The downstream regression/classification of danger severity could be performed directly on the caption output.
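A minimal sketch of this pipeline using BLIP's release on the Hugging Face hub. The keyword-weighted severity score is a hypothetical stand-in for the trained downstream classifier, which we cannot build without a dataset:

```python
# Caption a frame with BLIP, then score severity from the caption text.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption(image_path):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# Hypothetical keyword weights; a real system would instead train a
# classifier or regressor on (caption, severity) pairs.
DANGER_WORDS = {"flood": 2, "water": 1, "trapped": 3, "roof": 2,
                "injured": 3, "fire": 3, "collapsed": 2}

def severity(text):
    return sum(w for word, w in DANGER_WORDS.items() if word in text.lower())

text = caption("frame.jpg")  # placeholder frame path
print(text, "-> severity", severity(text))
```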

4) OpenPose + PointNet
https://github.com/CMU-Perceptual-Computing-Lab/openpose
https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch
PointNet - https://arxiv.org/abs/1612.00593

MPII Human Pose Dataset for humans in various positions / doing different activities - http://human-pose.mpi-inf.mpg.de/

OpenPose is a pre-trained model that processes image/video data and generates a set of body keypoints per person; a PointNet-style network can then classify these point sets downstream, for example to distinguish a person standing and waving from one lying motionless.
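A minimal sketch of the pairing in the heading: a PointNet-style classifier (shared per-point MLP followed by a symmetric max-pool, as in the PointNet paper above) over the keypoints OpenPose emits. The three pose classes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class KeypointPointNet(nn.Module):
    """PointNet-style classifier over OpenPose keypoints.

    Each keypoint is (x, y, confidence); a shared per-point MLP plus
    max-pooling gives an order-invariant embedding, as in PointNet.
    The 3 output classes (standing / lying / waving) are illustrative.
    """
    def __init__(self, n_classes=3):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, points):            # points: (batch, n_points, 3)
        feats = self.point_mlp(points)    # per-point features
        pooled = feats.max(dim=1).values  # symmetric pooling over points
        return self.head(pooled)

# 25 body keypoints per person, as produced by OpenPose's BODY_25 model.
dummy = torch.rand(1, 25, 3)
logits = KeypointPointNet()(dummy)
print(logits.shape)  # torch.Size([1, 3])
```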

5) Remote Heart Rate and Respiratory Rate monitoring
Ali Al-Naji, Kim Gibson & Javaan Chahl (2017) Remote sensing of physiological signs using a machine vision system, Journal of Medical Engineering & Technology, 41:5, 396-405, DOI: 10.1080/03091902.2017.1313326

This is a handcrafted model that estimates heart rate and respiratory rate from video at a distance of about 50 m. The model was only validated on 10 individuals and in very specific settings, e.g. front-facing subjects.
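As a generic illustration of the signal-processing idea behind video-based vital signs (not the authors' handcrafted pipeline): band-pass filter the mean intensity of a tracked skin or chest region over time, then take the dominant frequency in the physiological band:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_rate(trace, fps, low_hz, high_hz):
    """Estimate a periodic rate (per minute) from a 1-D region-mean
    intensity trace. Generic illustration only; the cited paper's
    handcrafted pipeline differs."""
    b, a = butter(3, [low_hz, high_hz], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, trace - trace.mean())
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fps = 30
t = np.arange(0, 20, 1 / fps)  # 20 s of synthetic "video" signal
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(dominant_rate(trace, fps, 0.7, 3.0))  # ~72 bpm (1.2 Hz heart band)
print(dominant_rate(trace, fps, 0.1, 0.5))  # respiratory band, noise here
```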



