Multimodal RGB-D + RF-based sensing for human movement analysis
Petteri Teikari, PhD
https://www.linkedin.com/in/petteriteikari/
Senior Research Fellow
UCL Queen Square Institute of Neurology
Version "18/11/19"
Executive Summary
1-3x RealSenses + WiFi Sensing
For real-world deployment, one RealSense is probably enough, but additional cameras can probably reduce some ambiguity when collecting the pilot data?
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
Can WiFi Estimate Person Pose?
Fei Wang, Stanislav Panev, Ziyi Dai, Jinsong Han, Dong Huang
Xi'an Jiaotong University, Carnegie Mellon University, Zhejiang University
https://arxiv.org/abs/1904.00277
https://github.com/geekfeiw/WiSPPN
WiFi Signals and Channel State Information (CSI): For the human sensing application, the human body, as an object, is able to change the carrier. In this paper, we aim to learn the mapping rule from that change to single-person pose coordinates. We set WiFi working within a 20 MHz band; the CSI of 30 carriers can be obtained through an open-source tool [Halperin et al. (2011), cited by 585].
+ Edge Computing
Real-time deep learning
NVIDIA Jetson Nano
$99
https://developer.nvidia.com/embedded/jetson-nano-developer-kit
Power consumption remains low at about 5-10 Watts
472 GFLOPS of FP16
https://www.phoronix.com/scan.php?page=article&item=jetson-nano-cooling&num=3
Runs hot with the default passive heatsink
Intel Neural Compute Stick 2
https://software.intel.com/en-us/neural-compute-stick
capable of 100 GFLOPS of performance whilst only consuming 1 W of power
https://movidius.github.io/ncsdk/tensorflow.html
$69 USD
Coral, Google TPU on Raspberry Pi or the Dev Board
Google Coral Edge TPU Board vs NVIDIA Jetson Nano Dev Board: Hardware Comparison https://towardsdatascience.com/google-coral-edge-tpu-board-vs-nvidia-jetson-nano-dev-board-hardware-comparison-31660a8bda88
Very few results are present for the Coral Edge TPU board as it cannot run pre-trained models which were not trained with quantization-aware training. In the above results the Jetson used FP16 precision. https://github.com/jolibrain/dd_performances
https://www.phoronix.com/scan.php?page=article&item=nvidia-jetson-nano&num=3
Manu Suryavansh: "In my opinion the Coral Edge TPU dev board is better because of the reasons below:
1. The Coral dev board at $149 is slightly more expensive than the Jetson Nano ($99); however, it supports WiFi and Bluetooth, whereas for the Jetson Nano one has to buy an external WiFi dongle.
2. Additionally, the NXP i.MX8 SoC on the Coral board includes a video processing unit and a Vivante GC7000 Lite GPU which can be used for traditional image and video processing. It also has a Cortex-M4F low-power microcontroller which can be used to talk to other sensors such as temperature sensors, ambient light sensors, etc. More sensors here: http://lightsensors.blogspot.com/2014/09/collection-of-various-sensors.html
The Jetson also has video encoder and decoder units. Additionally, the Jetson Nano has better support for other deep learning frameworks like PyTorch and MXNet. It also supports the NVIDIA TensorRT accelerator library for FP16 and INT8 inference. The Edge TPU board only supports 8-bit quantized TensorFlow Lite models, and you have to use quantization-aware training."
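The Edge TPU's 8-bit requirement boils down to affine quantization: float tensors are mapped to int8 via a scale and a zero-point. A minimal numpy sketch of the idea (illustrative only; this is not the actual TensorFlow Lite kernel, and the function names are made up):

```python
import numpy as np

def quantize_int8(x):
    """Affine int8 quantization: x ~= scale * (q - zero_point)."""
    lo = min(float(x.min()), 0.0)   # range must include 0 so that
    hi = max(float(x.max()), 0.0)   # zero is exactly representable
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, z = quantize_int8(w)
err = np.abs(dequantize(q, s, z) - w).max()
# round-trip error is bounded by roughly half a quantization step
assert err <= s / 2 + 1e-4
```

Quantization-aware training simulates exactly this rounding during the forward pass, so the network learns weights that survive the int8 mapping.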
TPUs work nicely for convolutions
Use TCNs instead of recurrent models
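The "TCNs instead of recurrent models" advice fits the hardware point above: a temporal convolutional network replaces recurrence with stacked dilated causal 1-D convolutions, which run in parallel over time and map well onto convolution-optimized accelerators. A minimal numpy sketch of one dilated causal layer (illustrative, not any specific library's API):

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation                 # left-pad so output stays causal
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)                             # toy input sequence
y = causal_conv1d(x, w=np.array([1.0, -1.0]), dilation=1)  # y[t] = x[t] - x[t-1]
assert np.allclose(y, [0., 1., 1., 1., 1., 1., 1., 1.])
y2 = causal_conv1d(x, w=np.array([1.0, -1.0]), dilation=2)  # y2[t] = x[t] - x[t-2]
assert np.allclose(y2, [0., 1., 2., 2., 2., 2., 2., 2.])
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth while every output remains strictly causal.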
Sensing Modalities & Tech
RGB+D: RealSense or Kinect Azure basically your options
https://www.intelrealsense.com/compare-depth-cameras/
$199.00 $179.00
Sensors 2018, 18(12), 4413; https://doi.org/10.3390/s18124413
In this paper, we investigate the applicability of four different RGB-D sensors for this task. We conduct an outdoor experiment, measuring plant attributes at various distances and light conditions. Our results show that modern RGB-D sensors, in particular the Intel D435 sensor, provide a viable tool for close-range phenotyping tasks in fields.
Multi-Camera Configuration for Intel® RealSense™ D400 Series Depth Sensors
https://www.intel.co.uk/content/www/uk/en/support/articles/000028140/emerging-technologies/intel-realsense-technology.html
https://github.com/IntelRealSense/librealsense
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
Synchronize signals
Trigger Signal Generator for multi-camera setup?
If only the RealSenses are used, then one camera can be used as the trigger
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
Beyond Computer Vision?
Towards Environment Independent Device-Free Human Activity Recognition
Wenjun Jiang et al. (2018)
https://doi.org/10.1145/3241539.3241548
Driven by a wide range of real-world applications, significant efforts have recently been made to explore device-free human activity recognition techniques that utilize the information collected by various wireless infrastructures to infer human activities without the need for the monitored subject to carry a dedicated device.
In this paper, we propose EI, a deep-learning based device-free activity recognition framework that can remove the environment- and subject-specific information contained in the activity data and extract environment/subject-independent features shared by the data collected on different subjects under different environments. We conduct extensive experiments on four different device-free activity recognition testbeds: WiFi, ultrasound, 60 GHz mmWave, and visible light. The experimental results demonstrate the superior effectiveness and generalizability of the proposed EI framework.
Acoustic/Ultrasound Sensing
Wenjun Jiang et al. (2018)
https://doi.org/10.1145/3241539.3241548
We aim to study the effect of human activities on ultrasound signals and evaluate the performance of the proposed system. To achieve the goal, we employ 12 volunteers (including both men and women) as the subjects to conduct the 6 different activities (wiping the whiteboard, walking, moving a suitcase, rotating the chair, sitting, as well as standing up and sitting down) that are shown in Fig. 4. The activity data are collected from 6 different rooms in two different buildings.
Figure 7 shows the experiment setting in one of the rooms. The transmitter is an iPad on which an ultrasound generator app is installed, and it can emit an ultrasound signal of nearly 19 kHz. The receiver is a smartphone and we use the installed recorder app to collect the sound waves. The sound signal received by the receiver is a mixture of the sound waves traveling through the Line-of-Sight (LOS) path and those reflected by the surrounding objects, including the human bodies in the room.
Example of "Ultrasound" Setup
Well, if you want to call 19 kHz ultrasound
Ultrasonic Distance Sensor HC-SR04
SEN-15569
$3.95
Single-sensor multispeaker listening with acoustic metamaterials
Yangbo Xie et al. PNAS (2015)
https://doi.org/10.1073/pnas.1502276112
Cited by 39 - Related articles
A survey on acoustic sensing
Chao Cai, Rong Zheng, Menglan Hu
(Submitted on 11 Jan 2019)
https://arxiv.org/abs/1901.03450
In this paper, we present the first survey of recent
advances in acoustic sensing using
commodity hardware. We propose a general
framework that categorizes main building blocks
of acoustic sensing systems. This framework
consists of three layers, i.e., the physical layer, processing layer, and application layer.
We highlight different sensing approaches in the
processing layer and fundamental design
considerations in the physical layer. Many existing
and potential applications, including context-aware applications, human-computer interfaces, and aerial acoustic communications, are presented in depth. Challenges and future research trends are also discussed.
Hand Gesture Recognition Based on Active Ultrasonic Sensing of Smartphone: A Survey
Zhengjie Wang et al. (August 2019)
https://doi.org/10.1109/ACCESS.2019.2933987
This paper investigates the state-of-the-art hand
gesture applications and presents a comprehensive
survey on the characteristics of studies using the
active sonar sensing system. Firstly, we review the
existing research of hand gesture recognition based on
acoustic signals. After that, we introduce the
characteristics of ultrasonic signal and describe the
fundamental principle of hand gesture
recognition. Then, we focus on the typical methods
used in these studies and present a detailed analysis on
signal generation, feature extraction, preprocessing,
and recognition methods. Next, we investigate the
state-of-the-art ultrasonic-based applications of hand
gesture recognition using smartphone and analyze
them in detail from dynamic gesture recognition and
hand tracking. Afterwards, we make a discussion about
these systems from signal acquisition, signal
processing, and performance evaluation to obtain
some insight into development of the ultrasonic
hand gesture recognition system. Finally, we
conclude by discussing the challenges, insights, and open issues involved in hand gesture recognition based on the ultrasonic signal of the smartphone.
Comparison of systems, including adopted signal, extracted signal, sensors, devices, number of devices, additional sensors, and device-free operation; channel impulse response (CIR)
mmWave Sensing
Project Soli in depth: How radar-detected gestures could set the Pixel 4 apart. An experimental Google project may finally be ready to make its way into the real world, and the implications could be enormous. https://www.computerworld.com/article/3402019/google-project-soli-pixel-4.html
The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high precision
position and motion data, and gesture labels and parameters at frame rates from 100 to 10,000 frames per
second. The Soli sensor is a fully integrated, low-power radar operating in the 60-GHz ISM band.
https://youtu.be/0QNiZfSsPc0
Wenjun Jiang et al. (2018)
https://doi.org/10.1145/3241539.3241548
individual antenna element phase shift and the relatively small number of antenna elements.
For example, the main lobe in the beams generated by our hardware is 30-35
degrees. In Fig. 11, we illustrate the pattern of the beam we used (beam 12) in
polar coordinates. Such imperfect beams often result in non-negligible
multipath propagation (although still weaker than in WiFi). Thus, using only
the physics laws it is very difficult to precisely model the complex
ambient environments as well as the unique characteristics of different
human subjects. Deep learning techniques are an ideal solution for this problem due to their superior feature extraction ability.
Example of mmWave Setup
Searchlight: Tracking device mobility using indoor luminaries to adapt 60 GHz beams (2018)
We present SearchLight, a system that enables adaptive steering of highly directional 60 GHz beams via passive sensing of visible light from existing illumination
MilliBack: Real-Time Plug-n-Play Millimeter Level Tracking Using Wireless Backscattering
Ning Xiao et al. (September 2019)
https://doi.org/10.1145/3351270
Real-time handwriting tracking is important for many emerging
applications such as Artificial Intelligence assisted education and
healthcare. Existing movement tracking systems, including those based
on vision, ultrasound or wireless technologies, fail to offer high tracking accuracy, no learning/training/calibration process, low tracking latency, low cost, and easy deployment at the same time.
In this work, we design and evaluate a wireless backscattering based
handwriting tracking system, called MilliBack, that satisfies all these
requirements. At the heart of MilliBack are two Phase Differential
Iterative (PDI) schemes that can infer the position of the backscatter
tag (which is attached to a writing tool) from the change in the signal
phase. By adopting carefully-designed differential techniques in an
iterative manner, we can take the diversity of devices out of the equation.
The resulting position calculation has a linear complexity with the number of samples, ensuring fast and accurate tracking.
We have put together a MilliBack prototype and conducted comprehensive experiments. We show that our system can track various handwriting traces accurately; in some tests it achieves a median error of 4.9 mm. We can accurately track and reconstruct arbitrary writing/drawing trajectories such as equations, Chinese characters or just random shapes. We also show that MilliBack can support relatively high writing speed and smoothly adapt to changes in the working environment.
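The backscatter-phase idea rests on a simple relation: a radial displacement Δd shifts the round-trip signal phase by Δφ = 4π·Δd/λ, and at mmWave λ is only a few millimetres, so sub-millimetre motion produces large, measurable phase changes. A toy numpy sketch (not MilliBack's actual PDI scheme; the 60 GHz carrier and round-trip factor are illustrative assumptions):

```python
import numpy as np

c = 3e8
f = 60e9                       # assumed mmWave carrier frequency
lam = c / f                    # wavelength, approx. 5 mm

def phase_to_displacement(dphi, wavelength=lam):
    """Round-trip model: dphi = 4*pi*dd / wavelength  =>  dd = dphi*wavelength/(4*pi)."""
    return dphi * wavelength / (4 * np.pi)

# a 1 mm radial movement produces this round-trip phase change:
dphi = 4 * np.pi * 1e-3 / lam          # about 2.5 rad, easily measurable
dd = phase_to_displacement(dphi)
assert abs(dd - 1e-3) < 1e-12
```

This is why millimetre-level tracking is feasible from phase alone; the hard part, which the PDI schemes address, is cancelling device-specific phase offsets via differential measurements.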
New Chip for Microwave Imaging of Body
September 30th, 2019
https://www.medgadget.com/2019/09/new-chip-for-microwave-imaging-of-body.html
Today’s clinicians are limited to a few imaging modalities, primarily X-ray, CT, MRI, and ultrasound. Microwaves, in principle, can also be
used as a useful way to look inside the body. Microwave radiation is
non-ionizing, so should be safer than X-rays, but in practice
microwave imagers, because of the electronics inside, have
remained bulky tabletop devices. Not only have they been impractical for imaging the body, the electronics inside conventional microwave imagers have suffered from interference.
Now, researchers at the University of Pennsylvania have developed a microwave imaging chip that replaces critical electronic components with optical ones, thereby allowing it to be much smaller and not suffer from as much interference.
The device is manufactured using now traditional
semiconductor techniques resulting in a chip with over 1,000
photonic components, including waveguides and photodiodes. The
device essentially works by converting microwave signals, that
bounce back from the target, into optical ones. It then uses optical
circuitry to process the data and generate an image of the target. It
is only 2 millimeters on a side, so the components are
microscopic.
Since the chip is about the size of one in your smartphone, it can be integrated into small, potentially hand-held devices to image the heart, spot cancer cells, and even look inside the brain.
Single-chip nanophotonic near-field imager
Farshid Ashtiani, Angelina Risi, and Firooz Aflatouni
Optica Vol. 6, Issue 10, pp. 1255-1260 (2019)
https://doi.org/10.1364/OPTICA.6.001255
Here we introduce and demonstrate the first single-chip nanophotonic near-
field imager, where the impinging microwave signals are upconverted to the
optical domain and optically delayed and processed to form the near-field
image of the target object. The 121-element imager, which is integrated on a
silicon chip, is capable of simultaneous processing of ultra-wideband
microwave signals and achieves 4.8° spatial resolution for near-field
imaging with orders of magnitude smaller size than the benchtop implementations and a fraction of the power consumption.
Fig. 4. Near-field imaging results. (a) Imaging measurement setup. The object is at a distance of about 0.5 m from the receive antenna array. The transmitter radiates UWB pulses toward the object and the reflected signals are received and processed in the nanophotonic imager chip. The dimensions of the target objects and their distance to the imager are chosen to ensure that the entire object is within the imager field-of-view. The transmit antenna, the target object, and the receive antenna array are placed inside a shielded anechoic chamber. (b) Three target objects and their near-field images formed using the implemented nanophotonic imager are shown.
WiFi Sensing
Need the receiver antennas, but otherwise easyish to implement?
Wenjun Jiang et al. (2018)
https://doi.org/10.1145/3241539.3241548
Channel State Information. In this section, we make use of the
Channel State Information (CSI) to analyze the effect
of the human activities on the WiFi signal. CSI refers to
known channel properties of a communication link in wireless
communications. This information describes how a signal
propagates from the transmitter to the receiver and
represents the combined effect of, for example, scattering, fading, and power decay with distance.
Modern WiFi devices supporting IEEE 802.11n/ac 2.4 GHz / 5 GHz
standards have multiple transmitting and receiving antennas,
and thus can transmit data in MIMO (Multiple-Input Multiple-
Output) mode. In an Orthogonal Frequency Division
Multiplexing (OFDM) system, the channel between each pair
of transmitting and receiving antennas consists of multiple
subcarriers.
We use the tool in Halperin et al. (2011, cited by 585)* to report CSI values of 30 OFDM subcarriers. Thus, the dimensionality of H is 30 × Nt × Nr. The reason why
CSI can be used for recognizing human activities is
mainly because it is easily affected by the presence of
humans and their activities. Specifically, the human body
may block the Line-of-Sight (LOS) path and attenuate
the signal power. Additionally, the human body can
introduce more signal reflections and change the
number of propagation paths. Thus, the variance of CSI
can reflect the human movements in the WiFi
environments.
* Our toolkit uses the Intel WiFi Link 5300 wireless NIC with 3 antennas. It works
on up-to-date Linux operating systems: in our testbed we use Ubuntu 10.04 LTS with the
2.6.36 kernel
https://dhalperi.github.io/linux-80211n-csitool/
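The variance argument above can be made concrete: stack CSI frames into the 30 × Nt × Nr tensor H over time and use per-link temporal variance of the amplitude as a crude motion indicator. A numpy sketch on synthetic data (the CSI values here are simulated, not captured with the Intel 5300 tool; the multipath model is a deliberately crude stand-in):

```python
import numpy as np

rng = np.random.default_rng(42)
T, S, Nt, Nr = 200, 30, 2, 3          # frames, subcarriers, TX and RX antennas
H = np.ones((T, S, Nt, Nr), dtype=complex)

# "static room": small measurement noise only
static = H + 0.01 * rng.standard_normal((T, S, Nt, Nr))
# "person moving": a slowly rotating extra multipath component
t = np.linspace(0, 4 * np.pi, T)[:, None, None, None]
moving = H + 0.3 * np.exp(1j * t) + 0.01 * rng.standard_normal((T, S, Nt, Nr))

def motion_score(csi):
    """Mean temporal variance of CSI amplitude across all subcarrier/antenna links."""
    return np.var(np.abs(csi), axis=0).mean()

# movement inflates CSI variance by orders of magnitude over the static case
assert motion_score(moving) > 10 * motion_score(static)
```

Real pipelines feed windows of such CSI tensors into classifiers rather than thresholding raw variance, but the detectability rests on exactly this effect.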
Example of WiFi Setup
Channel State Information from Pure Communication to Sense and Track Human Motion: A Survey
Mohammed A. A. Al-qaness et al. (2019)
https://doi.org/10.3390/s19153329
Recently, wireless signals have been utilized to track human
motion and Human Activity Recognition (HAR) in indoor
environments. The motion of an object in the test
environment causes fluctuations and changes in the Wi-Fi signal reflections at the receiver, which result in
variations in received signals. These fluctuations can be used
to track object (i.e., a human) motion in indoor environments.
This phenomenon can be improved and leveraged in the
future to improve the internet of things (IoT) and smart home
devices.
The main Wi-Fi sensing methods can be broadly
categorized as Received Signal Strength Indicator (RSSI),
Wi-Fi radar (by using Software Defined Radio (SDR)) and
Channel State Information (CSI). CSI and RSSI can be
considered as device-free mechanisms because they do
not require cumbersome installation, whereas the
Wi-Fi radar mechanism requires special devices (i.e., Universal Software Radio Peripheral (USRP)).
Recent studies demonstrate that CSI outperforms RSSI
in sensing accuracy due to its stability and rich
information. This paper presents a comprehensive survey
of recent advances in the CSI-based sensing mechanism
and illustrates the drawbacks, discusses challenges, and
presents some suggestions for the future of device-free sensing technology.
Hybrid sensing methods. As already discussed in
previous sections, different techniques have different
limitations; body sensors attached to the user’s body
may be used to solve some limitations of current Wi-Fi
sensing systems. Therefore, combining body sensors or smartphones with device-free Wi-Fi-based methods into hybrid sensing technologies
needs to be addressed in future work. The first simple attempt to combine CSI and wearable devices was presented in Fang et al. 2016 (BodyScan, cited by 44).
Moreover, CSI can play an important role in the
IoT; therefore, hybrid methods to apply CSI in
multimedia communications and IoT applications
can be addressed. Furthermore, Wireless Sensor Network (WSN) schemes can be studied.
BodyScan: Enabling radio-based sensing on wearable devices for contactless activity and vital sign monitoring
Biyi Fang‡, Nicholas D. Lane†∗, Mi Zhang‡, Aidan Boran†, Fahim Kawsar†; ‡Michigan State University, †Bell Labs, ∗University College London
MobiSys '16: Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services
https://doi.org/10.1145/2906388.2906411  Cited by 44
For these reasons, we expect radio-based sensing to play an important role in the future evolution of wearable
devices, and hope the design and techniques of BodyScan can act as a useful foundation for the subsequent
investigations.
Joint Activity Recognition and Indoor Localization With WiFi Fingerprints
Fei Wang; Jianwei Feng; Yinliang Zhao; Xiaobin Zhang; Shiyuan Zhang; Jinsong Han
IEEE Access (Volume 7, June 2019)
https://doi.org/10.1109/ACCESS.2019.2923743
https://github.com/geekfeiw/apl
Recent years have witnessed rapid development in the research topic of WiFi sensing, which automatically senses humans with commercial WiFi devices. Past work falls into two major categories, i.e., activity recognition and indoor localization.
The key rationale behind WiFi sensing is that people's behaviors can influence the WiFi signal propagation and introduce specific patterns into WiFi signals, called WiFi fingerprints, which can be further explored to identify human activities and locations.
In this paper, we propose a novel deep learning
framework for joint activity recognition and indoor
localization task using WiFi channel state information
(CSI) fingerprints. More precisely, we develop a system
running standard IEEE 802.11n WiFi protocol and
collect more than 1400 CSI fingerprints on 6
activities at 16 indoor locations. Then we propose a
dual-task convolutional neural network with one-
dimensional convolutional layers for the joint task of
activity recognition and indoor localization. The
experimental results and ablation study show that our
approach achieves good performance in this joint WiFi sensing task.
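The dual-task idea (one shared 1-D convolutional trunk over CSI, with separate classification heads for activity and location) can be sketched as a forward pass in numpy. All shapes and layer sizes here are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, w):
    """x: (C_in, T), w: (C_out, C_in, K) -> (C_out, T-K+1), valid 1-D convolution."""
    C_out, C_in, K = w.shape
    T = x.shape[1]
    out = np.zeros((C_out, T - K + 1))
    for o in range(C_out):
        for t in range(T - K + 1):
            out[o, t] = np.sum(w[o] * x[:, t:t + K])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

csi = rng.standard_normal((30, 64))              # 30 subcarriers x 64 time samples
w_shared = rng.standard_normal((8, 30, 5)) * 0.1  # shared conv trunk
feat = np.maximum(conv1d_valid(csi, w_shared), 0).mean(axis=1)  # ReLU + global pool -> (8,)

w_act, w_loc = rng.standard_normal((6, 8)), rng.standard_normal((16, 8))
p_activity = softmax(w_act @ feat)               # head 1: 6 activities
p_location = softmax(w_loc @ feat)               # head 2: 16 locations
assert p_activity.shape == (6,) and p_location.shape == (16,)
```

Training would minimize the sum of the two heads' cross-entropy losses so the shared trunk learns features useful for both tasks.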
As shown in FIGURE 1, the first two figures are the top view and front view of the universal software radio peripheral (USRP; Ettus N210, £1,710), respectively. The USRP is mainly composed of a motherboard, a daughterboard and a WiFi antenna, which is used to broadcast or receive WiFi signals under the control of GNU Radio (https://www.gnuradio.org/). The details are listed below. Meanwhile, the assembly diagram is shown in FIGURE 2.
1. Ettus N210s: Hardware with a field programmable gate array (FPGA) in which the IEEE 802.11n protocol can be embedded to send and receive WiFi packages for CSI fingerprints.
2. Ettus Clock (https://www.ettus.com/all-products/OctoClock-G/, £1,680) and synchronization cables: Synchronizing the N210s with a GPS clock to avoid WiFi phase shifting caused by the clock differences between two N210s.
3. Antennas: To broadcast or receive WiFi signals under the control of GNU Radio.
4. Computers and Ethernet cables: To control the N210s when set in the same local area network as the N210s.
Evaluating Indoor Localization Performance on an IEEE 802.11ac Explicit-Feedback-Based CSI Learning System
Takeru Fukushima; Tomoki Murakami; Hirantha Abeysekera; Shunsuke Saruwatari; Takashi Watanabe
2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring)
https://doi.org/10.1109/VTCSpring.2019.8746628
There is a demand for device-free user
location estimation with high accuracy in order
to realize various indoor applications. This paper
proposes an IEEE 802.11ac explicit feedback-
based channel state information (CSI)
learning system which can be used for device-
free user location estimation. The proposed CSI
learning system captures CSI feedback from
off-the-shelf Wi-Fi devices and extracts
624 features from a CSI feedback frame defined in IEEE 802.11ac.
We evaluated the proposed system using location estimation with six patterns: different combinations
of device-free user movement and access point
antenna orientation. The evaluation results show
that the machine learning based localization
achieves approximately 96% accuracy for
seven positions of the user, and the
divergence of CSI improves localization
performance.
The finding is interesting: the divergence of CSI
improves machine learning performance. Previous
studies such as PhaseFi [14] and PADS [15] have used
linear transformation for localization, and described that stable phases improve localization accuracy.
Low Human-Effort, Device-Free Localization with Fine-Grained Subcarrier Information
Ju Wang et al. (2018)
IEEE Transactions on Mobile Computing (Volume 17, Issue 11, Nov. 1 2018)
https://doi.org/10.1109/TMC.2018.2812746
Device-free localization of objects not equipped with RF
radios is playing a critical role in many applications. This
paper presents LIFS, a Low human-effort, device-free
localization system with fine-grained subcarrier
information, which can localize a target accurately
without offline training. The basic idea is simple:
channel state information (CSI) is sensitive to a target's
location and thus the target can be localized by modelling
the CSI measurements of multiple wireless links. However,
due to rich multipath indoors, CSI can not be easily
modelled.
To deal with this challenge, our key observation is that even in a rich multipath environment, not all subcarriers are affected equally by multipath reflections. Our CSI pre-processing scheme tries to identify the subcarriers not affected by multipath. Thus, CSI on the “clean” subcarriers can still be utilized for accurate localization.
Without the need of knowing the majority transceivers'
locations, LiFS achieves a median accuracy of 0.5 m
and 1.1 m in line-of-sight (LoS) and non-line-of-sight
(NLoS) scenarios, respectively, outperforming the state-of-the-art systems.
We design, implement and evaluate LiFS against the
existing Pilot, RASS and RTI systems. Real-world
experiments demonstrate that LiFS outperforms the three state-of-the-art systems.
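LIFS' "clean subcarrier" observation can be illustrated with a simple variance-based filter: subcarriers dominated by multipath show much higher temporal fluctuation, so keep the quiet ones. A numpy sketch on synthetic data (this selection rule is a simplified stand-in for LIFS' actual pre-processing, and the corruption model is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
T, S = 100, 30                              # time samples, subcarriers
csi_amp = np.ones((T, S)) + 0.02 * rng.standard_normal((T, S))  # baseline + noise
bad = rng.choice(S, size=10, replace=False)  # 10 multipath-affected subcarriers
csi_amp[:, bad] += 0.5 * np.sin(np.linspace(0, 20, T))[:, None]  # strong fluctuation

def clean_subcarriers(csi, keep=15):
    """Return indices of the `keep` subcarriers with the lowest temporal variance."""
    return np.argsort(np.var(csi, axis=0))[:keep]

kept = clean_subcarriers(csi_amp)
# none of the multipath-corrupted subcarriers should survive the filter
assert set(kept).isdisjoint(set(bad))
```

Localization then models only the kept subcarriers, which is what lets LIFS skip offline training.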
WiFi Sensing with Channel State Information: A Survey
Yongsen Ma, Gang Zhou, Shuangquan Wang
ACM Computing Surveys (CSUR), Volume 52, Issue 3, July 2019
https://doi.org/10.1145/3310194
Different WiFi sensing algorithms and signal
processing techniques have their own advantages
and limitations and are suitable for different WiFi
sensing applications. The survey groups CSI-based
WiFi sensing applications into three categories,
detection, recognition, and estimation, depending on
whether the outputs are binary/multi-class
classifications or numerical values. With the
development and deployment of new WiFi
technologies, there will be more WiFi sensing
opportunities wherein the targets may go beyond humans to environments, animals, and objects.
The survey highlights three challenges for WiFi
sensing: robustness and generalization,
privacy and security, and coexistence of WiFi
sensing and networking. Finally, the survey
presents three future WiFi sensing trends, i.e.,
integrating cross-layer network information, multi-
device cooperation, and fusion of different
sensors, for enhancing existing WiFi
sensing capabilities and enabling new WiFi sensing opportunities.
[1] Heba Abdelnasser, Khaled A. Harras, and Moustafa Youssef.
2015. UbiBreathe: A Ubiquitous non-invasive WiFi-based
breathing estimator. In Proceedings of the 16th ACM
International Symposium on Mobile Ad Hoc Networking and
Computing (MobiHoc’15). 277–286.
https://doi.org/10.1145/2746285.2755969
[56] Jian Liu, Yan Wang, Yingying Chen, Jie Yang, Xu Chen, and
Jerry Cheng. 2015. Tracking vital signs during sleep leveraging
off-the-shelf WiFi. In Proceedings of the 16th ACM International
Symposium on Mobile Ad Hoc Networking and Computing
(MobiHoc’15). 267–276. https://doi.org/10.1145/2746285.2746303
[100] Xuyu Wang, Chao Yang, and Shiwen Mao. 2017. PhaseBeat:
Exploiting CSI phase data for vital sign monitoring with commodity
WiFi devices. In Proceedings of the 2017 IEEE 37th International
Conference on Distributed Computing Systems (ICDCS’17). 1230–
1239. https://doi.org/10.1109/ICDCS.2017.206
[120] Fu Xiao, Jing Chen, Xiao Hui Xie, Linqing Gui, Juan Li Sun, and
Wang Ruchuan. 2018. SEARE: A system for exercise activity
recognition and quality evaluation based on green sensing.
IEEE Trans. Emerg. Top. Comput. (Early Access) (2018).
https://doi.org/10.1109/TETC.2018.2790080
New WiFi sensing algorithms are also required to take full advantage of multi-domain
information with time, spatial, and user dependence. New coordination algorithms are
necessary for extracting useful information from different domains. Since CSI has some unique properties, such as low spatial resolution and sensitivity to environmental changes, it is crucial for WiFi sensing algorithms to be robust in different scenarios. Most existing deep learning solutions for WiFi sensing reuse DNNs designed for images and videos. It is necessary to find suitable DNN types and develop new DNNs specifically designed for CSI data. For cross-sensor WiFi sensing, pre-trained DNNs for other sensors can be used for automatic labeling of CSI data. Transfer learning, teacher-student network training, and reinforcement learning can also be used to reduce network training efforts. WiFi sensing can easily be used for malicious purposes, since WiFi signals can be passively transmitted through walls and are not limited by lighting conditions. Generative Adversarial Networks (GANs) can be used to generate fake WiFi signal patterns to prevent malicious WiFi sensing.
WiFi Sensing for other vital signals, or whether they are artifacts to actual movement analysis?
WiFi works for respiratory and heart rate sensing too.
Sleep apnea tracking? Combine with audio tracking with Amazon Alexa?
ECG (electrocardiography) / BCG (ballistocardiography) relationship
Heart-brain interactions in the MR environment: characterization of the ballistocardiogram in EEG signals collected during simultaneous fMRI
Marco Marino, Quanying Liu, Mariangela Del Castello, Cristiana Corsi, Nicole Wenderoth, Dante Mantini
Now published in Brain Topography, doi: 10.1007/s10548-018-0631-1
https://slideplayer.com/slide/7859166/
An ear-worn continuous ballistocardiogram (BCG) sensor for cardiovascular monitoring
David Da He, Eric S. Winokur, Charles Sodini, Annual International Conference of the IEEE 2012
DOI: 10.1109/EMBC.2012.6347123
Respiration Sensing
For hospitalized patients, it is easy to throw in a mattress-based sensor; whether that is useful in the end is another matter
Some commercial options exist, so there is no need to build anything. They do not really measure sleep well, but maybe OK for respiration + RR intervals?
https://doi.org/10.1007/978-3-319-78759-6_32
Novel. http://www.novel.de/novelcontent/pliance/wheelchair
Tekscan. https://www.tekscan.com/
PressureProfile. https://pressureprofile.com/
Texisense. http://www.texisense.com/capteur_en
Sensor Products. http://www.sensorprod.com/dynamic/mattress.php
Emfit. http://www.safetysystemsdistribution.co.uk/bed-exit-alarms/
Emfit. https://www.emfitqs.com/
Murata. https://www.murata.com/
EarlySense. https://www.earlysense.com/
For analogy, see PLR* (Pupillary Light Reflex)
You see heart rate and respiration in pupil size
https://www.slideshare.net/PetteriTeikariPhD/hyperspectral-retinal-imaging
If you wanted to see nonlinear dynamics, you may want to try to separate ANS from other pathological processes?
Nonlinear analysis of pupillary dynamics
https://www.researchgate.net/publication/280919208_Nonlinear_analysis_of_pupillary_dynamics
accessed Jul 13 2018
"Our results suggest that the pupillary dynamics are
modulated at different time scales by processes and/or
systems in a similar way as the cardiovascular system,
confirming the hypothesis of a similar autonomic
modulation for different physiological systems [Calcagnini et al. 2000]. These results are also consistent with Yoshida et al. 1995, who reported for Spontaneous Fluctuations of Pupil Dilation (SFPD) an f^-1 spectral characteristic at frequencies f < 0.2 Hz"
Correlations between the autonomic modulation of heart rate, blood pressure and the pupillary light reflex in healthy subjects. Bär et al. (2009)
https://doi.org/10.1016/j.jns.2009.01.010
The interaction between pupil function and cardiovascular regulation in patients with acute schizophrenia. Bär et al. (2008)
https://doi.org/10.1016/j.clinph.2008.06.012
Infrared Camera-Based Non-contact Measurement of Brain Activity From Pupillary Rhythms. Park and Whang (2018) https://doi.org/10.3389/fphys.2018.01400
Respiratory fluctuations in pupil size. P. Borgdorff (1975)
https://www.ncbi.nlm.nih.gov/pubmed/1130509 - Cited by 75
Preliminary investigation of pupil size variability: toward non-contact assessment of cardiovascular variability
Hung and Zhang (2006)
https://doi.org/10.1109/ISSMDBS.2006.360118
Recent researches have discovered the presence of
heart rate variability (HRV) and blood pressure
variability (BPV) frequency components in pupil
size variability (PSV). Aims of this study are to
investigate the effect of physical exercise on the PSV spectrum,
and to address the feasibility of PSV monitoring.
Electrocardiogram, respiration effort, finger arterial
pressure, and pupil images were recorded from ten
subjects before and after exercise. Normalized high
frequency (0.15-0.5 Hz) power of PSV in seven subjects
and total (0.04-0.5 Hz) power of PSV in nine subjects
decreased after exercise, followed by an increase during recovery.
The patterns of changes are generally paralleled by
corresponding indexes from spectra of HRV and BPV.
Preliminary results suggest that PSV has the
potential to be a novel physiological indicator
used in patient-monitoring.
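The band powers Hung and Zhang report (total: 0.04-0.5 Hz, HF: 0.15-0.5 Hz) reduce to integrating a power spectral density over a frequency band. A minimal sketch with a synthetic pupil trace (the 0.1 Hz and 0.25 Hz components and the 4 Hz frame rate are illustrative assumptions, not values from the paper):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Integrate a one-sided periodogram of x over [f_lo, f_hi] Hz."""
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (f >= f_lo) & (f <= f_hi)
    return psd[mask].sum() * (f[1] - f[0])

fs = 4.0                                  # assumed pupil camera frame rate, Hz
t = np.arange(0, 300, 1.0 / fs)
# Slow 0.1 Hz drift plus a respiration-like 0.25 Hz component
pupil = 0.5 * np.sin(2 * np.pi * 0.10 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)

total = band_power(pupil, fs, 0.04, 0.50)  # "total" band of Hung & Zhang
hf = band_power(pupil, fs, 0.15, 0.50)     # high-frequency band
print(f"normalized HF power: {hf / total:.2f}")  # ~0.26 for these amplitudes
```

Normalized HF power is then simply the HF band power divided by total band power, mirroring standard HRV spectral indices.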
Non-contact measurement of heart response reflected in human eye
Park et al. (2018)
https://doi.org/10.1016/j.ijpsycho.2017.07.014
Does a change in iris diameter indicate heart rate variability? Koc et al. (2018)
https://doi.org/10.1016/j.jns.2009.01.010
PupilScreen: Using Smartphones to Assess Traumatic Brain Injury
https://atm15.github.io/publications/pupilscreen/
The nonlinear analysis of the PLR: well, it has always been sort of a niche.
Spontaneous oscillations in a nonlinear delayed-feedback shunting model of the pupil light reflex
P.C. Bressloff and C.V. Wood, Phys. Rev. E 58, 3597 (published 1 September 1998)
https://doi.org/10.1103/PhysRevE.58.3597
Hence, the experimentally motivated second-order delay equation presented in
this paper accounts for certain discrepancies of the previous first-order model
both in open-loop and closed-loop configurations. Furthermore, it allows one to
investigate the dependence of pupil light reflex dynamics on various
neurophysiologically important parameters of the system such as
the effective strength of neural connections and the time delay. These
parameters vary from patient to patient and extreme values can be an
indicator of a pathology.
Future work will investigate the effects of noise arising from the neural
components of the reflex arc, as well as details concerning photoreceptor
dynamics. In particular, the important role that photoreceptors play in light
adaptation will be investigated and contrasted with possible neural mechanisms
for adaptation.
In conclusion, the pupil light reflex is an important paradigm for
nonlinear feedback control systems. Understanding the behavior of such
systems involves important mathematical questions concerning the properties
of differential equations with delays and noise, and could benefit the
clinician interested in developing diagnostic tests for detecting
neurological disorders. It is also hoped that the work will have applications in
other areas such as respiratory and cardiac control.
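The qualitative behaviour of such delayed-feedback models can be reproduced with a few lines of numerical integration. The sketch below uses an illustrative first-order delayed negative feedback in the spirit of the Longtin-Milton pupil models (NOT Bressloff & Wood's second-order equation): tau * dA/dt = -A(t) + c / (1 + (A(t - delay)/theta)^n). All parameter values are made up; the point is only that raising the feedback gain n past a critical value turns a stable fixed point into a sustained pupil-cycling oscillation:

```python
import numpy as np

def simulate(n_hill, tau=1.0, delay=0.5, c=2.0, theta=1.0,
             dt=0.001, t_end=60.0):
    """Euler-integrate a delayed negative-feedback 'pupil area' A(t)."""
    steps = int(t_end / dt)
    lag = int(delay / dt)
    a = np.full(steps, 1.2)              # start slightly off the fixed point
    for i in range(1, steps):
        a_del = a[max(i - lag, 0)]       # delayed pupil area A(t - delay)
        feedback = c / (1.0 + (a_del / theta) ** n_hill)
        a[i] = a[i - 1] + dt * (-a[i - 1] + feedback) / tau
    return a

settled = simulate(n_hill=2)             # low gain: relaxes to fixed point
cycling = simulate(n_hill=10)            # high gain: sustained oscillation
print("late-time std, n=2: ", settled[-5000:].std())
print("late-time std, n=10:", cycling[-5000:].std())
```

Varying the gain and the delay in this toy model is the numerical analogue of the patient-to-patient parameter variation the quote above mentions.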
Pupil unrest: an example of noise in a biological servomechanism
L Stark, FW Campbell, J Atwood - Nature, 1958 - Cited by 130
Noise and critical behavior of the pupil light reflex at oscillation onset
A Longtin, JG Milton, JE Bos, MC Mackey - Physical Review A, 1990 - Cited by 178
Spontaneous oscillations in a nonlinear delayed-feedback shunting model of the pupil light reflex
PC Bressloff, CV Wood - Physical Review E, 1998 - Cited by 20
Nonlinear dynamics in physiology and medicine
A Beuter, L Glass, MC Mackey, MS Titcombe - 2003 - Cited by 145
How do spontaneous pupillary oscillations in light relate to light intensity?
M Warga, H Lüdtke, H Wilhelm, B Wilhelm - Vision Research, 2009 - Elsevier - Cited by 33
For analogy, see EEG / fMRI
(*EEG: electroencephalography; fMRI: functional magnetic resonance imaging)
EEG Example
Artifacts and noise removal for electroencephalogram (EEG): A literature review. Chi Qin Lai et al. (2018)
https://doi.org/10.1109/ISCAIE.2018.8405493
Raw EEG may be contaminated with unwanted components
such as noises and artifacts caused by power source,
environment, eye blinks, heart rate and muscle movements,
which are unavoidable. These unwanted components will affect
the analysis of EEG and provide inaccurate information.
Therefore, researchers have proposed all kinds of approaches to
eliminate unwanted noises and artifacts from EEG. In this paper,
a literature review is carried out to study the works that have
been done for noise and artifact removal from year 2010 up to
the present. It is found that conventional approaches include
ICA, wavelet based analysis, statistical analysis and others.
However, the existing ways of artifact removal cannot eliminate
certain noise and will cause information loss by directly discarding
the contaminated components. From the study, it is shown that
combining conventional methods with other methods is popular,
as it is able to improve the removal of artifacts. The current
trend of artifact removal makes use of machine learning to
provide an automated solution with higher efficiency.
Use independent component analysis (ICA) to remove ECG artifacts
http://www.fieldtriptoolbox.org/example/use_independent_component_analysis_ica_to_remove_ecg_artifacts/
EEG artifact removal with Blind Source Separation
https://sccn.ucsd.edu/~jung/Site/EEG_artifact_removal.html
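The core idea behind those pipelines is linear unmixing. A toy "cocktail party" sketch (not the FieldTrip/EEGLAB pipelines linked above): two hidden sources — a slow sine standing in for a cardiac artifact, a sawtooth standing in for neural activity — observed through an unknown mixing matrix and unmixed with scikit-learn's FastICA; the signals, mixing matrix, and shapes are all illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * 1.0 * t)             # artifact-like source
s2 = 2 * (t * 3 - np.floor(t * 3 + 0.5))     # sawtooth "neural" source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.4, 1.0]])       # unknown mixing matrix
X = S @ A.T                                  # two observed "electrodes"

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                 # recovered sources

# ICA recovers sources only up to order and sign, so compare
# via absolute correlation between true and recovered sources.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(np.round(corr, 2))  # each row should have one entry near 1
```

In the real EEG case the "electrodes" are tens of channels and the artifact components (eye blinks, ECG) are identified and discarded before re-mixing.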
Simultaneous EEG-fMRI Example
Ballistocardiogram Artifact Reduction in Simultaneous EEG-fMRI using Deep Learning
James R. McIntosh et al. (15 Oct 2019)
https://doi.org/10.1109/ISCAIE.2018.8405493
The concurrent recording of electroencephalography (EEG) and
functional magnetic resonance imaging (fMRI) is a technique that
has received much attention due to its potential for combined
high temporal and spatial resolution. However, the
ballistocardiogram (BCG), a large-amplitude artifact caused
by cardiac induced movement contaminates the EEG during
EEG-fMRI recordings. In this paper, we present a novel method
for BCG artifact suppression using recurrent neural networks
(RNNs).
One general difficulty in assessing BCG suppression quality is that the
ground truth BCG signal is unknown. Unlike other current
methods, the core of our method is designed to directly generate BCG
from ECG. It is therefore possible to see how BCGNet might be used to
simulate ground truth BCG signal, which can be augmented with
simulated 1/f noise and brain derived sources in order to study the
relative efficacy of other BCG suppression techniques. Furthermore, via
extensions of BCGNet that either model the propagation of the ECG
signal itself, or by directly injecting signal into the small central dense
layer of the network, it may be possible to gain fine grained control of the
BCG construction under test, for example, to study the different
methods under changing heart-rate conditions.
Well… all biosignals basically have mixed components, and you need to unmix them.
The ‘Cocktail Party Problem’, here and everywhere.
Variational Autoencoders and Nonlinear ICA: A Unifying Framework
Ilyes Khemakhem, Gatsby Computational Neuroscience Unit, UCL;
Diederik P. Kingma, Google Brain; Aapo Hyvärinen, INRIA-Saclay, Dept of CS, University of Helsinki (10 Jul 2019)
https://arxiv.org/abs/1907.04809
The advantage of the new framework over
typical deep latent-variable models used with
VAEs is that we actually recover the original
latents, thus providing principled
"disentanglement". On the other hand, the
advantages of this algorithm for solving
nonlinear ICA are several; briefly, we
obtain the likelihood and can use MLE, we
learn a forward model as well and can generate
new data, and we consider the more general
cases of noisy data with fewer components,
and even discrete data.
Independent component analysis: algorithms and applications
A Hyvärinen, E Oja - Neural Networks, 2000, 13 (4-5), 411-430 - Cited by 18,114
Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA
Aapo Hyvärinen, Hiroshi Morioka
Published in NIPS 2016
https://arxiv.org/abs/1605.06336 - Cited by 45
Throw in extra sensors?
- Microphone
- Pyroelectric IR
Using microphones for moving objects: Bedside Alexa / Google Home
Locating Moving Objects Using Stereo Sound Instead of Visual Input
Synced, Nov 4, 2019
https://medium.com/syncedreview/locating-moving-objects-using-stereo-sound-instead-of-visual-input-eccaca0ab899
Self-supervised Moving Vehicle Tracking with Stereo Sound
Chuang Gan, Hang Zhao, Peihao Chen, David Cox, Antonio Torralba
MIT CSAIL, MIT-IBM Watson AI Lab, IBM Research AI
https://arxiv.org/abs/1910.11760
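The basic cue behind locating a source with stereo sound is the time difference of arrival (TDOA) between the two microphones. A minimal cross-correlation sketch (the deep-learning papers above learn far richer representations; sample rate, mic spacing, and the delay value here are illustrative assumptions):

```python
import numpy as np

fs = 16000                                   # sample rate, Hz (assumed)
rng = np.random.default_rng(1)
source = rng.standard_normal(4000)           # 0.25 s of broadband source noise
true_delay = 7                               # samples: right mic lags left

left = source
right = np.concatenate([np.zeros(true_delay), source[:-true_delay]])

# Full cross-correlation; the peak offset from zero lag is the delay
xcorr = np.correlate(right, left, mode="full")
est_delay = int(np.argmax(xcorr)) - (len(left) - 1)
print("estimated delay:", est_delay, "samples")

# With mic spacing d and speed of sound c, the delay maps to a bearing
d, c = 0.2, 343.0                            # metres, m/s (assumed geometry)
angle = np.degrees(np.arcsin(np.clip(est_delay / fs * c / d, -1, 1)))
print(f"bearing: {angle:.1f} degrees off broadside")
```

Real systems typically use GCC-PHAT weighting and microphone arrays to make the peak robust to reverberation, but the delay-to-bearing geometry is the same.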
Microphone (arrays) useful at least for artifact rejection and sleep apnea?
https://www.verywellhealth.com/what-to-expect-in-a-sleep-study-3015121
Snoring classified: The Munich-Passau Snore Sound Corpus
January 2018 - Computers in Biology and Medicine 94
http://doi.org/10.1016/j.compbiomed.2018.01.007
Projects: Automatic General Audio Signal Classification / Snore Sound Classification, by Christoph Janott et al.
Recent development of respiratory rate measurement technologies
https://doi.org/10.1088/1361-6579/ab299e
https://www.cpap.com/blog/wearables-detect-sleep-apnea-apple-fitbit-garmin/
Pyroelectric IR (PIR) Sensing
Same tech as low-cost occupancy sensing for lighting, for example (PIR Motion Sensor)
Enabling Cognitive Pyroelectric Infrared Sensing: From Reconfigurable Signal Conditioning to Sensor Mask Design
Rui Ma; Jiaqi Gong; Guocheng Liu; Qi Hao
IEEE Transactions on Industrial Informatics (30 Sept 2019)
https://doi.org/10.1109/TII.2019.2944700
Poor signal-to-noise ratios (SNRs) and low spatial resolutions
have impeded low-cost pyroelectric infrared (PIR)
sensors from many intelligent applications for thermal target
detection/recognition. This paper presents a Cognitive
Signal Conditioning & Modulation Learning (CSCML)
framework for PIR sensing with two innovations to solve these
problems: 1) a reconfigurable signal conditioning circuit design
to achieve high SNRs; 2) an optimal sensor mask design to
achieve high recognition performance. By using a
Programmable System on Chip (PSoC), the PIR signal
amplifier gain and filter bandwidth can be adjusted
automatically according to working conditions. Based on the
modeling between PIR physics and thermal images, sensor
masks can be optimized through training
Convolution Neural Networks (CNNs) with large thermal
image datasets for feature extraction of specific thermal
targets. The experimental results verify the improved
performance of PIR sensors in various working conditions and
applications by using the developed reconfigurable circuit and
application-specific masks.
Ground Truth?
From Vicon Motion Capture, or laser scanner.
You want to know if your measurements and model output are actually accurate and useful.
Idea for dataset creation pipeline requirements
Wearables, Biomechanical Feedback, and Human Motor-Skills’ Learning & Optimization
Xiang Zhang, Gongbing Shan, Ye Wang, Bingjun Wan and Hua Li
Appl. Sci. 2019, 9(2), 226;
https://doi.org/10.3390/app9020226
It is well known that, among all human physical activities,
sports and arts skills exhibit the most diversity of motor
control. The datasets that are available for developing
deep learning models have to reflect the diversity,
because the depth and specialization must come from
training the deep learning algorithms with the massive
and diverse data collected from sports and arts motor
skills.
Therefore, at present, the vital step for developing a real-time
biomechanical feedback tool is to
simultaneously collect a large amount of motion data
using both 3D motion capture (e.g., the two-chain
model with ~40 markers) and wearable IMUs (e.g., the
same model with six IMUs).
The datasets should cover large variety of sports
skills and arts performances. As such, the 3D
motion-capture data can be served as a
“supervisor” for training network model to map
IMUs data to joints’ kinematic data. Such a deep learning
model could be universally applied in motor learning and
the training of sports and arts skills.
Machine and deep learning for sport-specific movement recognition: a systematic review of model development and performance. Anargyros William McNally, Alexander Wong, John McPhee. https://doi.org/10.1080/02640414.2018.1521769
Multimodal Measurement Rig for action recognition. They recorded human activities with a 360° RGB camera, Lidar and RGB-D at the same time. The rig is nice; otherwise maybe not so nice. "This multimodal dataset is the first of its kind to be made openly available and can be exploited for many applications that require HAR, including sports analytics, healthcare assistance and indoor intelligent mobility." https://arxiv.org/pdf/1901.02858.pdf
Multimodal room for teasing out what modalities matter?
Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning
Sensors 2019, 19(7), 1716;
SW · Contents Basic Technology Research Group, Electronics and Telecommunications Research Institute, Daejeon
https://doi.org/10.3390/s19071716
We adopt a two-level ensemble model to
combine class-probabilities of multiple
sensor modalities, and demonstrate that a
classifier-level sensor fusion technique can
improve the classification performance. By
analyzing the accuracy of each sensor on
different types of activity, we elaborate
custom weights for multimodal sensor
fusion that reflect the characteristic of
individual activities
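Classifier-level fusion of per-modality class probabilities, as described above, reduces to a weighted average followed by an argmax. A hedged sketch (the paper tunes weights per activity; the weights, modalities, and probabilities below are made up for illustration):

```python
import numpy as np

def fuse(probs_by_modality, weights):
    """Weighted average of class-probability matrices, then argmax.

    probs_by_modality: list of (n_samples, n_classes) arrays, one per sensor.
    """
    w = np.asarray(weights, float)
    w = w / w.sum()                          # normalize modality weights
    fused = sum(wi * p for wi, p in zip(w, probs_by_modality))
    return fused.argmax(axis=1)

# Two modalities, three activity classes, three samples:
# the depth camera is confident on class 0, WiFi on class 2.
depth = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.2, 0.3, 0.5]])
wifi = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7], [0.1, 0.1, 0.8]])
print(fuse([depth, wifi], weights=[0.6, 0.4]))  # -> [0 2 2]
```

A two-level ensemble would additionally learn per-activity weights instead of fixing a single weight per modality.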
Accuracy of WiFi might be enough for some tasks, while for others you need Intel RealSense (or some bed sensor).
Deep Learning for Musculoskeletal Force Prediction
https://doi.org/10.1007/s10439-018-02190-0
Department of Bioengineering, Imperial College London, London, UK
"The dataset comprised synchronously captured kinematic (lower
limb marker trajectories obtained by optoelectronic capture—Vicon MX
system, Vicon Motion Systems Ltd, Oxford, UK), force plate (ground
reaction force and centre of pressure—Kistler Instrumente AG,
Winterthur, Switzerland) and EMG (Trigno Wireless EMG system,
Delsys, USA) data from 156 subjects during multiple trials of level
walking"
Soft robot perception using embedded soft sensors and recurrent neural networks
Thomas George Thuruthel, Benjamin Shih, Cecilia Laschi and Michael Thomas Tolley
Science Robotics, 30 Jan 2019: Vol. 4, Issue 26, eaav1488
DOI: 10.1126/scirobotics.aav1488
Low-cost OptiTrack vs. High-End Vicon
https://github.com/motionlab-mogi-bme/Application-of-OptiTrack-motion-capture-systems-in-human-movement-analysis
A novel validation and calibration method for motion capture systems based on micro-triangulation
Gergely Nagymáté, Tamás Tuchband, Rita M. Kiss
Motion Analysis Laboratory of the Department of Mechatronics, Optics and Mechanical Engineering Informatics at the Budapest University of Technology and Economics in Hungary
https://doi.org/10.1016/j.jbiomech.2018.04.009
Our study aimed to analyse the absolute volume
accuracy of optical motion capture systems
by means of engineering surveying reference
measurement of the marker coordinates (uncertainty:
0.75mm). The method is exemplified on an 18
camera OptiTrack Flex13 motion capture system.
The absolute accuracy was defined by the root mean
square error (RMSE) between the coordinates
measured by the camera system and by engineering
surveying (micro-triangulation).
A simply feasible but less accurate absolute accuracy
compensation method using tape measure on large
distances was also tested, which resulted in similar
scaling compensation compared to the surveying
method or direct wand size compensation by a high
precision 3D laser scanner [Leica TS15i 1" total
stations (angular accuracy: 1”); ATOS II Triple Scan
MV320].
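The "absolute accuracy as RMSE between coordinates" metric used above is simple to reproduce. A sketch with synthetic numbers (the 3 m volume, 1 mm noise, and 0.05% scale error are made-up illustrations, not values from the study):

```python
import numpy as np

def marker_rmse(ref, test):
    """RMSE between corresponding 3D marker coordinates.

    ref, test: (n_markers, 3) arrays of x, y, z positions in mm.
    """
    return np.sqrt(np.mean(np.sum((ref - test) ** 2, axis=1)))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 3000, size=(50, 3))             # markers in a 3 m volume
# Same markers as seen by a camera system: slight scale error + ~1 mm noise
test = ref * 1.0005 + rng.normal(0, 1.0, ref.shape)

print(f"RMSE: {marker_rmse(ref, test):.2f} mm")
```

In the actual study `ref` would come from engineering surveying (micro-triangulation) and `test` from the motion capture system, after registering the two coordinate frames.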
Low-cost OptiTrack vs. High-End Vicon
https://doi.org/10.1016/j.jbiomech.2018.04.009 (2018):
"The use of low cost optical motion capture (OMC) multi-camera systems is spreading in the fields of biomechanics research (Hicheur et al., 2016) and rehabilitation (Chung et al., 2016).
Summary of accuracy evaluation studies: Different OMC systems are sometimes validated using Vicon camera systems (Vicon Motion Systems Ltd, Oxford, UK), which are regarded as the gold standard in scientific applications (Ehara et al., 1997, 1995).
https://doi.org/10.1016/0966-6362(95)99067-U https://doi.org/10.1016/S0966-6362(96)01093-4
The accuracy and processing time of 11
commercially available 3D camera systems
were tested to evaluate their performance in
clinical gait evaluation. The systems tested
were Ariel APAS, Dynas 3D/h, Elite Plus,
ExpertVision, PEAK5, PRIMAS, Quick MAG,
VICON 140, VICON 370, color Video Locus
and reflective Video Locus.
Accuracy and processing time of commercially
available 3D camera systems for clinical gait
measurement were measured. Tested systems
were: Quick MAG, Video Locus, Peak 5, Ariel,
Vicon 370, Elite, Kinemetrix 3D, and Optotrack
3020
Affordable Optical Motion Capture vs. Vicon “Ground Truth”
Affordable clinical gait analysis: An assessment of the marker tracking accuracy of a new low-cost optical 3D motion analysis system
Bruce Carse, Barry Meadows, Roy Bowers, Philip Rowe (2013)
https://doi.org/10.1016/j.physio.2013.03.001
Cited by 88 - Related articles
A rigid cluster of four reflective markers was used to compare a
low-cost Optitrack 3D motion analysis system against two
more expensive systems (Vicon 612 and Vicon MX).
Accuracy was measured by comparing the mean vector
magnitudes (between each combination of markers) for each
system.
There are a number of shortcomings of optical 3D
motion analysis systems; cost of equipment, time required
and expertise to interpret results. While it does not address all
of these problems, the Optitrack system provides a low-cost
solution that can accurately track marker trajectories to a level
comparable with an older and widely used higher cost system
(Vicon 612). While it cannot be considered to be a complete
clinical motion analysis solution, it does represent a positive
step towards making 3DGA more accessible to wider
research and clinical audiences.
Next-Generation Low-Cost Motion Capture Systems Can Provide Comparable Spatial Accuracy to High-End Systems
Dominic Thewlis, Chris Bishop, Nathan Daniell, Gunther Paul (2013)
https://doi.org/10.1123/jab.29.1.112
Cited by 49 - Related articles
We assessed static linear accuracy, dynamic linear accuracy and compared gait kinematics from a
Vicon MX-f20 system to a Natural Point OptiTrack system. In all experiments data were
sampled simultaneously. We identified both systems perform excellently in linear accuracy tests with
absolute errors not exceeding 1%. In gait data there was again strong agreement between the two
systems in sagittal and coronal plane kinematics. Transverse plane kinematics differed by up to 3° at
the knee and hip, which we attributed to the impact of soft tissue artifact accelerations on the data.
We suggest that low-cost systems are comparably accurate to their high-end
competitors and offer a platform with accuracy acceptable in research for laboratories with a
limited budget.
Further work is required to explore the absolute angular
accuracy of the systems and their susceptibility to high
accelerations associated with soft tissue artifact; however, it is
likely that differences of this magnitude might be evident between
competing high-end solutions. We must also begin to explore
analog integration or synchronization with low-cost
systems, as inaccuracies here could impact significantly when
calculating joint moments and powers using inverse dynamics.
IMUs vs. Goniometer ground truth
Predictive trajectory estimation during rehabilitative tasks in augmented reality using inertial sensors
Christopher L. Hunt; Avinash Sharma; Luke E. Osborn; Rahul R. Kaliki; Nitish V. Thakor
Department of Biomedical Engineering, Johns Hopkins University / Infinite Biomedical Technologies
2018 IEEE Biomedical Circuits and Systems Conference (BioCAS)
https://doi.org/10.1109/BIOCAS.2018.8584805
This paper presents a wireless kinematic tracking framework used
for biomechanical analysis during rehabilitative tasks in augmented and
virtual reality. The framework uses low-cost inertial measurement units
and exploits the rigid connections of the human skeletal system to provide
egocentric position estimates of joints to centimeter accuracy. On-board
sensor fusion combines information from three-axis accelerometers,
gyroscopes, and magnetometers to provide robust estimates in real-time.
Sensor precision and accuracy were validated using the root mean square
error of estimated joint angles against ground-truth goniometer
measurements (high-precision stepper motor with a 0.9° step size; NEMA,
Rosslyn, VA). The sensor network produced a mean estimate accuracy of
2.81° with 1.06° precision, resulting in a maximum hand tracking error of 7.06 cm.
As an application, the network is used to collect kinematic information from
an unconstrained object manipulation task in augmented reality, from
which dynamic movement primitives are extracted to characterize natural
task completion in N = 3 able-bodied human subjects. These primitives are
then leveraged for trajectory estimation in both a generalized and a subject-
specific scheme resulting in 0.187 cm and 0.161 cm regression
accuracy, respectively. Our proposed kinematic tracking network is
wireless, accurate, and especially useful for predicting voluntary actuation in
virtual and augmented reality applications.
An overview of a rehabilitation session. (A) The individual uses an augmented
reality headset to receive kinematic tasks to complete. Tasks consist of
transporting an object to and from different quadrants while possibly changing
its orientation. Sensorized tracking nodes {nRF51822 microcontroller (Nordic Semiconductor via
RedBearLab) with MPU9250 9-axis IMU with Mahony complementary filter [protocol Nordic Enhanced ShockBurst]}
are
rigidly affixed to the anatomical landmarks and are used to record multijoint
trajectories for primitive construction. (B) Once computed, these primitives are
used to predict natural, user-specific hand trajectories in subsequent
tasks. These predicted trajectories can then be rendered by the headset to
serve as an optimal reference for the user.
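The tracking nodes above run a quaternion Mahony complementary filter on-board. A stripped-down, single-axis cousin of that idea (pitch only; all signals and parameter values here are synthetic illustrations, not the nodes' actual filter) shows how gyroscope integration and accelerometer tilt are blended:

```python
import numpy as np

def complementary_filter(gyro, accel, fs, alpha=0.98):
    """Fuse gyro rate (rad/s) and accelerometer tilt into a pitch angle."""
    dt = 1.0 / fs
    theta = np.zeros(len(gyro))
    for i in range(1, len(gyro)):
        gyro_est = theta[i - 1] + gyro[i] * dt            # integrate rate
        accel_est = np.arctan2(accel[i, 0], accel[i, 2])  # tilt from gravity
        theta[i] = alpha * gyro_est + (1 - alpha) * accel_est
    return theta

# Synthetic slow tilt from 0 to 30 degrees over 10 s, with sensor noise
fs, dur = 100, 10.0
t = np.arange(0, dur, 1.0 / fs)
true = np.deg2rad(30) * t / dur
rng = np.random.default_rng(0)
gyro = np.gradient(true, 1.0 / fs) + rng.normal(0, 0.02, len(t)) + 0.01  # + bias
accel = np.c_[np.sin(true), np.zeros(len(t)), np.cos(true)] \
        + rng.normal(0, 0.05, (len(t), 3))

est = complementary_filter(gyro, accel, fs)
err_deg = np.degrees(np.abs(est - true)).mean()
print(f"mean absolute angle error: {err_deg:.2f} degrees")
```

The gyro term tracks fast motion but drifts with bias; the accelerometer term is noisy but drift-free, and the blend parameter alpha trades one against the other, which is exactly the role the full quaternion Mahony filter plays on the IMU nodes.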
Gold Standard Benchmarking: IMU vs. Optical Capture
A sensor-to-segment calibration method for motion capture system based on low cost MIMU
Namchol Choe, Hongyu Zhao, Sen Qiu, Yongguk So
Measurement, Volume 131, January 2019, Pages 490-500
https://doi.org/10.1016/j.measurement.2018.07.078
A sensor-to-segment calibration method for motion
capture system is proposed. Calibration principle,
procedure and program are listed. Positions of the
magnetometer correction are determined. Influence of the
magnetic and inertial measurement units (MIMU) mounting
position is evaluated. Effectiveness of the proposed method is
validated by optical device (NDI Polaris Spectra System).
Coordinate systems in body and vectors of body segments. (a) Body local coordinate system (BLCS) and body segment coordinate system (BSCS), (b) Vectors of body segments.
A sensor fusion approach for inertial sensors based 3D kinematics and pathological gait assessments: toward an adaptive control of stimulation in post-stroke subjects
B. Sijobert; F. Feuvrier; J. Froger; D. Guiraud; C. Azevedo Coste
https://doi.org/10.1109/EMBC.2018.8512985 (2018)
Pathological gait assessment and assistive control based on functional electrical
stimulation (FES) in post-stroke individuals, brings out a common need to robustly quantify
kinematics facing multiple constraints. This study proposes a novel approach using inertial
sensors to compute dorsiflexion angles and spatio-temporal parameters, in order to be later used
as inputs for online closed-loop control of FES. 26 post-stroke subjects were asked to walk on a
pressure mat equipped with inertial measurement units (IMU) and passive reflective
markers. A total of 930 strides were individually analyzed and results between IMU-based
algorithms and reference systems compared. Mean absolute (MA) errors of dorsiflexion
angles were found to be less than 4°, while stride lengths were robustly segmented and
estimated with a MA error less than 10 cm. These results open new doors to rehabilitation using
adaptive FES closed-loop control strategies in “foot drop” syndrome correction.
Soft-tissue Artifact (STA): the human body is too soft as a metrological platform
if you start attaching IMUs to the body.
Quantification of soft tissue artifact in lower limb human motion analysis: A systematic review
Alana Peters, Brook Galna, Morgan Sangeux, Meg Morris, Richard Baker
Gait & Posture, Volume 31, Issue 1, January 2010, Pages 1-8
https://doi.org/10.1016/j.gaitpost.2009.09.004
Cited by 221 - Related articles
Conflict of interest: A/Prof Richard Baker and Dr Morgan Sangeux receive research funding from Vicon (Oxford, UK).
A Simple Algorithm for Assimilating Marker-Based Motion Capture Data During Periodic Human Movement Into Models of Multi-Rigid-Body Systems
Yasuyuki Suzuki, Takuya Inoue, and Taishin Nomura
Front Bioeng Biotechnol. 2018; 6: 141. Published online 2018 Oct 18.
doi: 10.3389/fbioe.2018.00141
Here we propose a simple algorithm for assimilating motion capture data during
periodic human movements, such as bipedal walking, into models of multi-rigid-body
systems in a way that the assimilated motions are not affected by STA. The
proposed algorithm assumes that STA time-profiles during periodic movements are
also periodic. We then express unknown STA profiles using Fourier series,
and show that the Fourier coefficients can be determined optimally based solely on
the periodicity assumption for the STA and kinematic constraints requiring that
any two adjacent rigid-links are connected by a rotary joint, leading to the
STA-free assimilated motion that is consistent with the multi-rigid-link model.
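The core assumption in Suzuki et al. — an artifact that is periodic over the gait cycle can be written as a truncated Fourier series — can be sketched with plain least squares. The "artifact" below is synthetic and the real method additionally enforces the joint constraints; cycle period, harmonics, and noise level are made-up illustrations:

```python
import numpy as np

def fourier_design(t, period, n_harmonics):
    """Design matrix of a truncated Fourier series with the given period."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
period = 1.1                                   # assumed gait cycle duration, s
t = np.arange(0, 10 * period, 0.01)
# Periodic soft-tissue artifact (mm): first + second harmonic of the cycle
sta = 5 * np.cos(2 * np.pi * t / period) + 2 * np.sin(4 * np.pi * t / period)
measured = sta + rng.normal(0, 0.5, len(t))    # marker residual + noise

X = fourier_design(t, period, n_harmonics=3)
coef, *_ = np.linalg.lstsq(X, measured, rcond=None)
sta_hat = X @ coef                             # reconstructed STA profile
rms_err = np.sqrt(np.mean((sta_hat - sta) ** 2))
print(f"RMS reconstruction error: {rms_err:.2f} mm")
```

Because the fit is linear in the Fourier coefficients, it stays cheap even when extended with the rotary-joint constraints the paper adds on top.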
Rigid seven-link model of human walking. (A) Positions of landmarks and rigid seven-link model of human body. Rigid seven-link model consists of Head-Arm-Trunk link (HAT), left and right Thigh links (l/r-T), left and right Shank links (l/r-S), and left and right Foot links (l/r-F). Blue circles represent landmarks of each link, and each landmark corresponds to an anatomical landmark of the human body.
Joint kinematics estimation using a multi-body kinematics optimisation and an extended Kalman filter, and embedding a soft tissue artefact model
Vincent Bonnet et al. - Cited by 7 - Related articles
Journal of Biomechanics, Volume 62, 6 September 2017, Pages 148-158
https://doi.org/10.1016/j.jbiomech.2017.04.033
To reduce the impact of the soft tissue artefact (STA) on the estimate of skeletal movement using stereophotogrammetric and skin-marker data, multi-body kinematics optimisation (MKO) and extended Kalman filters (EKF) have been proposed. Embedding the STA model in MKO and EKF reduced the average RMS of marker tracking from 12.6 to 1.6 mm and from 4.3 to 1.9 mm, respectively, showing that a STA model trial-specific calibration is feasible.
You could now look at all the literature on spatio-temporal tracking (pedestrians, sports, autonomous driving, GPS trajectory, etc.) to constrain the possible movement of IMU units:
https://scholar.google.co.uk/scholar?as_ylo=2015&q=spatio+temporal+tracking+deep+learning&hl=en&as_sdt=0,5&authuser=1
Quantification of three-dimensional soft tissue artifacts in the canine hindlimb during passive stifle motion
https://doi.org/10.1186/s12917-018-1714-7
Soft tissue artifact compensation in knee kinematics by multi-body optimization: Performance of subject-specific knee joint models (2015)
https://doi.org/10.1016/j.jbiomech.2015.09.040
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Languageijtsrd
 
Hogeschool PXL Smart Mirror
Hogeschool PXL Smart MirrorHogeschool PXL Smart Mirror
Hogeschool PXL Smart MirrorVincent Claes
 
Critical analysis of radar data signal de noising by implementation of haar w...
Critical analysis of radar data signal de noising by implementation of haar w...Critical analysis of radar data signal de noising by implementation of haar w...
Critical analysis of radar data signal de noising by implementation of haar w...eSAT Journals
 
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...IRJET Journal
 
Research topics for EON Realty's Research Grant Program (RGP) v16
Research topics for  EON Realty's Research Grant Program (RGP) v16Research topics for  EON Realty's Research Grant Program (RGP) v16
Research topics for EON Realty's Research Grant Program (RGP) v16Senthilkumar R
 
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICES
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICESFUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICES
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICESvasim hasina
 
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassRaspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassIRJET Journal
 
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...IJERA Editor
 
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm Dataset
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm DatasetDaily Human Activity Recognition using Adaboost Classifiers on Wisdm Dataset
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm Datasetijtsrd
 
bazgir2020.pdf
bazgir2020.pdfbazgir2020.pdf
bazgir2020.pdfNISHRO
 
bazgir2020.pdf
bazgir2020.pdfbazgir2020.pdf
bazgir2020.pdfNISHRO
 

Ähnlich wie RGB-D+RF-based sensing for human movement analysis (20)

IRJET- Human Activity Recognition using Flex Sensors
IRJET- Human Activity Recognition using Flex SensorsIRJET- Human Activity Recognition using Flex Sensors
IRJET- Human Activity Recognition using Flex Sensors
 
Smart Sound Measurement and Control System for Smart City
Smart Sound Measurement and Control System for Smart CitySmart Sound Measurement and Control System for Smart City
Smart Sound Measurement and Control System for Smart City
 
Nuzzer algorithm based Human Tracking and Security System for Device-Free Pas...
Nuzzer algorithm based Human Tracking and Security System for Device-Free Pas...Nuzzer algorithm based Human Tracking and Security System for Device-Free Pas...
Nuzzer algorithm based Human Tracking and Security System for Device-Free Pas...
 
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
 
Wireless and uninstrumented communication by gestures for deaf and mute based...
Wireless and uninstrumented communication by gestures for deaf and mute based...Wireless and uninstrumented communication by gestures for deaf and mute based...
Wireless and uninstrumented communication by gestures for deaf and mute based...
 
IRJET - Creating a Security Alert for the Care Takers Implementing a Vast Dee...
IRJET - Creating a Security Alert for the Care Takers Implementing a Vast Dee...IRJET - Creating a Security Alert for the Care Takers Implementing a Vast Dee...
IRJET - Creating a Security Alert for the Care Takers Implementing a Vast Dee...
 
50120130406043
5012013040604350120130406043
50120130406043
 
AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B...
AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B...AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B...
AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B...
 
Real Time Translator for Sign Language
Real Time Translator for Sign LanguageReal Time Translator for Sign Language
Real Time Translator for Sign Language
 
Hogeschool PXL Smart Mirror
Hogeschool PXL Smart MirrorHogeschool PXL Smart Mirror
Hogeschool PXL Smart Mirror
 
delna's journal
delna's journaldelna's journal
delna's journal
 
Critical analysis of radar data signal de noising by implementation of haar w...
Critical analysis of radar data signal de noising by implementation of haar w...Critical analysis of radar data signal de noising by implementation of haar w...
Critical analysis of radar data signal de noising by implementation of haar w...
 
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...
IRJET-V9I114.pdfA Review Paper on Economical Bionic Arm with Predefined Grasp...
 
Research topics for EON Realty's Research Grant Program (RGP) v16
Research topics for  EON Realty's Research Grant Program (RGP) v16Research topics for  EON Realty's Research Grant Program (RGP) v16
Research topics for EON Realty's Research Grant Program (RGP) v16
 
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICES
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICESFUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICES
FUSION OF GAIT AND FINGERPRINT FOR USER AUTHENTICATION ON MOBILE DEVICES
 
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassRaspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
 
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
 
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm Dataset
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm DatasetDaily Human Activity Recognition using Adaboost Classifiers on Wisdm Dataset
Daily Human Activity Recognition using Adaboost Classifiers on Wisdm Dataset
 
bazgir2020.pdf
bazgir2020.pdfbazgir2020.pdf
bazgir2020.pdf
 
bazgir2020.pdf
bazgir2020.pdfbazgir2020.pdf
bazgir2020.pdf
 

Mehr von PetteriTeikariPhD

ML and Signal Processing for Lung Sounds
ML and Signal Processing for Lung SoundsML and Signal Processing for Lung Sounds
ML and Signal Processing for Lung SoundsPetteriTeikariPhD
 
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsNext Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsPetteriTeikariPhD
 
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...PetteriTeikariPhD
 
Wearable Continuous Acoustic Lung Sensing
Wearable Continuous Acoustic Lung SensingWearable Continuous Acoustic Lung Sensing
Wearable Continuous Acoustic Lung SensingPetteriTeikariPhD
 
Precision Medicine for personalized treatment of asthma
Precision Medicine for personalized treatment of asthmaPrecision Medicine for personalized treatment of asthma
Precision Medicine for personalized treatment of asthmaPetteriTeikariPhD
 
Two-Photon Microscopy Vasculature Segmentation
Two-Photon Microscopy Vasculature SegmentationTwo-Photon Microscopy Vasculature Segmentation
Two-Photon Microscopy Vasculature SegmentationPetteriTeikariPhD
 
Skin temperature as a proxy for core body temperature (CBT) and circadian phase
Skin temperature as a proxy for core body temperature (CBT) and circadian phaseSkin temperature as a proxy for core body temperature (CBT) and circadian phase
Skin temperature as a proxy for core body temperature (CBT) and circadian phasePetteriTeikariPhD
 
Summary of "Precision strength training: The future of strength training with...
Summary of "Precision strength training: The future of strength training with...Summary of "Precision strength training: The future of strength training with...
Summary of "Precision strength training: The future of strength training with...PetteriTeikariPhD
 
Precision strength training: The future of strength training with data-driven...
Precision strength training: The future of strength training with data-driven...Precision strength training: The future of strength training with data-driven...
Precision strength training: The future of strength training with data-driven...PetteriTeikariPhD
 
Intracerebral Hemorrhage (ICH): Understanding the CT imaging features
Intracerebral Hemorrhage (ICH): Understanding the CT imaging featuresIntracerebral Hemorrhage (ICH): Understanding the CT imaging features
Intracerebral Hemorrhage (ICH): Understanding the CT imaging featuresPetteriTeikariPhD
 
Hand Pose Tracking for Clinical Applications
Hand Pose Tracking for Clinical ApplicationsHand Pose Tracking for Clinical Applications
Hand Pose Tracking for Clinical ApplicationsPetteriTeikariPhD
 
Creativity as Science: What designers can learn from science and technology
Creativity as Science: What designers can learn from science and technologyCreativity as Science: What designers can learn from science and technology
Creativity as Science: What designers can learn from science and technologyPetteriTeikariPhD
 
Efficient Data Labelling for Ocular Imaging
Efficient Data Labelling for Ocular ImagingEfficient Data Labelling for Ocular Imaging
Efficient Data Labelling for Ocular ImagingPetteriTeikariPhD
 
Dashboards for Business Intelligence
Dashboards for Business IntelligenceDashboards for Business Intelligence
Dashboards for Business IntelligencePetteriTeikariPhD
 
Labeling fundus images for classification models
Labeling fundus images for classification modelsLabeling fundus images for classification models
Labeling fundus images for classification modelsPetteriTeikariPhD
 

Mehr von PetteriTeikariPhD (16)

ML and Signal Processing for Lung Sounds
ML and Signal Processing for Lung SoundsML and Signal Processing for Lung Sounds
ML and Signal Processing for Lung Sounds
 
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsNext Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
 
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
 
Wearable Continuous Acoustic Lung Sensing
Wearable Continuous Acoustic Lung SensingWearable Continuous Acoustic Lung Sensing
Wearable Continuous Acoustic Lung Sensing
 
Precision Medicine for personalized treatment of asthma
Precision Medicine for personalized treatment of asthmaPrecision Medicine for personalized treatment of asthma
Precision Medicine for personalized treatment of asthma
 
Two-Photon Microscopy Vasculature Segmentation
Two-Photon Microscopy Vasculature SegmentationTwo-Photon Microscopy Vasculature Segmentation
Two-Photon Microscopy Vasculature Segmentation
 
Skin temperature as a proxy for core body temperature (CBT) and circadian phase
Skin temperature as a proxy for core body temperature (CBT) and circadian phaseSkin temperature as a proxy for core body temperature (CBT) and circadian phase
Skin temperature as a proxy for core body temperature (CBT) and circadian phase
 
Summary of "Precision strength training: The future of strength training with...
Summary of "Precision strength training: The future of strength training with...Summary of "Precision strength training: The future of strength training with...
Summary of "Precision strength training: The future of strength training with...
 
Precision strength training: The future of strength training with data-driven...
Precision strength training: The future of strength training with data-driven...Precision strength training: The future of strength training with data-driven...
Precision strength training: The future of strength training with data-driven...
 
Intracerebral Hemorrhage (ICH): Understanding the CT imaging features
Intracerebral Hemorrhage (ICH): Understanding the CT imaging featuresIntracerebral Hemorrhage (ICH): Understanding the CT imaging features
Intracerebral Hemorrhage (ICH): Understanding the CT imaging features
 
Hand Pose Tracking for Clinical Applications
Hand Pose Tracking for Clinical ApplicationsHand Pose Tracking for Clinical Applications
Hand Pose Tracking for Clinical Applications
 
Creativity as Science: What designers can learn from science and technology
Creativity as Science: What designers can learn from science and technologyCreativity as Science: What designers can learn from science and technology
Creativity as Science: What designers can learn from science and technology
 
Light Treatment Glasses
Light Treatment GlassesLight Treatment Glasses
Light Treatment Glasses
 
Efficient Data Labelling for Ocular Imaging
Efficient Data Labelling for Ocular ImagingEfficient Data Labelling for Ocular Imaging
Efficient Data Labelling for Ocular Imaging
 
Dashboards for Business Intelligence
Dashboards for Business IntelligenceDashboards for Business Intelligence
Dashboards for Business Intelligence
 
Labeling fundus images for classification models
Labeling fundus images for classification modelsLabeling fundus images for classification models
Labeling fundus images for classification models
 

Kürzlich hochgeladen

From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????blackmambaettijean
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 

Kürzlich hochgeladen (20)

From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 

RGB-D+RF-based sensing for human movement analysis

  • 3. 1-3x RealSenses + WiFi Sensing
For real-world deployment, one RealSense is probably enough, but additional cameras can likely reduce some ambiguity when collecting the pilot data.
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
Can WiFi Estimate Person Pose? Fei Wang, Stanislav Panev, Ziyi Dai, Jinsong Han, Dong Huang (Xi'an Jiaotong University, Carnegie Mellon University, Zhejiang University). https://arxiv.org/abs/1904.00277 https://github.com/geekfeiw/WiSPPN
WiFi signals and Channel State Information (CSI): for human-sensing applications, the human body, as an object, changes the carrier. In this paper we aim to learn the mapping rule from that change to single-person pose coordinates. We set WiFi to work within a 20 MHz band; the CSI of 30 carriers can be obtained through an open-source tool [Halperin et al. (2011), cited by 585].
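A CSI-to-pose network such as WiSPPN consumes windows of complex CSI readings. Below is a minimal numpy sketch of how a raw CSI stream (packets x carriers, as produced by the Halperin CSI tool) might be shaped into amplitude/phase feature windows; the window size and 2-channel layout are illustrative assumptions, not the actual WiSPPN pipeline.

```python
import numpy as np

def csi_to_features(csi, win=32):
    """Turn a complex CSI stream (packets x carriers) into fixed-size
    amplitude/phase windows, roughly the kind of input a CSI-to-pose
    network would consume (shapes here are illustrative)."""
    amp = np.abs(csi)                         # per-carrier amplitudes
    phase = np.unwrap(np.angle(csi), axis=0)  # unwrapped per-carrier phases
    # slide a non-overlapping window over the packet axis
    n = (csi.shape[0] // win) * win
    amp = amp[:n].reshape(-1, win, csi.shape[1])
    phase = phase[:n].reshape(-1, win, csi.shape[1])
    # stack as 2-channel "images": (windows, 2, win, carriers)
    return np.stack([amp, phase], axis=1)

# 300 packets of 30-carrier CSI, matching the 30 carriers of the 20 MHz setup
rng = np.random.default_rng(0)
csi = rng.standard_normal((300, 30)) + 1j * rng.standard_normal((300, 30))
feats = csi_to_features(csi)
print(feats.shape)  # (9, 2, 32, 30)
```

From there a CNN regresses the window tensor to keypoint coordinates.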
  • 8. Google Coral Edge TPU Board vs NVIDIA Jetson Nano Dev Board: Hardware Comparison
https://towardsdatascience.com/google-coral-edge-tpu-board-vs-nvidia-jetson-nano-dev-board-hardware-comparison-31660a8bda88
Very few benchmark results are available for the Coral Edge TPU board, as it cannot run pre-trained models that were not trained with quantization-aware training. In the results above, the Jetson used FP16 precision. https://github.com/jolibrain/dd_performances https://www.phoronix.com/scan.php?page=article&item=nvidia-jetson-nano&num=3
Manu Suryavansh: "In my opinion the Coral Edge TPU dev board is better, for the following reasons: 1. The Coral dev board, at $149, is slightly more expensive than the Jetson Nano ($99), but it supports Wi-Fi and Bluetooth, whereas for the Jetson Nano one has to buy an external Wi-Fi dongle. 2. Additionally, the NXP i.MX 8 SoC on the Coral board includes a video processing unit and a Vivante GC7000 Lite GPU, which can be used for traditional image and video processing. It also has a Cortex-M4F low-power microcontroller that can talk to other sensors such as temperature or ambient-light sensors. More sensors here: http://lightsensors.blogspot.com/2014/09/collection-of-various-sensors.html The Jetson also has video encoder and decoder units. Additionally, the Jetson Nano has better support for other deep learning frameworks such as PyTorch and MXNet. It also supports NVIDIA's TensorRT accelerator library for FP16 and INT8 inference. The Edge TPU board only supports 8-bit quantized TensorFlow Lite models, and you have to use quantization-aware training."
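Since the Edge TPU only accepts 8-bit quantized TensorFlow Lite models, the core arithmetic underneath is affine uint8 quantization. A simplified numpy sketch of that arithmetic; real TFLite quantization is per-axis and calibration-driven, so this is illustrative only.

```python
import numpy as np

def quantize_uint8(w):
    """Affine uint8 quantization of a weight tensor: the arithmetic that
    Edge-TPU-style 8-bit inference relies on (simplified, per-tensor)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0
    zero_point = int(np.round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the integer codes back to approximate float values."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, s, z = quantize_uint8(w)
err = np.max(np.abs(dequantize(q, s, z) - w))
print(err < s)  # True: round-off is bounded by one quantization step
```

Quantization-aware training simulates exactly this rounding during training so the model learns weights that survive it.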
  • 9. TPUs work nicely for convolutions. Use TCNs (temporal convolutional networks) instead of recurrent models.
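The TCN recommendation rests on causal dilated convolutions: the output at time t depends only on current and past samples, so the whole sequence can be processed as one convolution, which is exactly what TPU-style hardware accelerates. A toy numpy sketch of one such layer (kernel and dilation chosen for illustration):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal convolution with dilation: the output at time t sees
    only x[t], x[t-d], x[t-2d], ... This is the building block TCNs
    stack in place of recurrent layers (illustrative sketch)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad: no future leaks in
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(6, dtype=float)                      # 0..5
y = causal_dilated_conv(x, w=[1.0, 1.0], dilation=2)
print(y)  # y[t] = x[t] + x[t-2]  ->  [0. 1. 2. 4. 6. 8.]
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how TCNs cover long histories without recurrence.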
  • 11. RGB+D: RealSense or Azure Kinect are basically your options.
  • 12. RGB+D: RealSense or Azure Kinect are basically your options. https://www.intelrealsense.com/compare-depth-cameras/ ($199.00 / $179.00)
Sensors 2018, 18(12), 4413; https://doi.org/10.3390/s18124413
"In this paper, we investigate the applicability of four different RGB-D sensors for this task. We conduct an outdoor experiment, measuring plant attributes at various distances and light conditions. Our results show that modern RGB-D sensors, in particular the Intel D435 sensor, provide a viable tool for close-range phenotyping tasks in fields."
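Working with any of these depth cameras eventually means deprojecting depth pixels into 3-D camera-space points via the pinhole intrinsics (librealsense exposes this as rs2_deproject_pixel_to_point). A numpy sketch of the same arithmetic, with made-up D435-like intrinsic values:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole deprojection of one depth pixel (u, v) at depth_m metres
    to a 3-D point in the camera frame (simple model, no distortion)."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.array([x, y, depth_m])

# illustrative intrinsics for a 640x480 depth stream (assumed values)
fx = fy = 380.0
cx, cy = 320.0, 240.0
p = deproject(u=420.0, v=240.0, depth_m=1.0, fx=fx, fy=fy, cx=cx, cy=cy)
print(p)  # [0.263..., 0.0, 1.0]: ~26 cm right of the optical axis at 1 m
```

Applied per pixel, this turns a depth frame into the point cloud that skeleton-tracking pipelines consume.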
  • 13. RGB+D: Multi-Camera Configuration for Intel® RealSense™ D400 Series Depth Sensors
https://www.intel.co.uk/content/www/uk/en/support/articles/000028140/emerging-technologies/intel-realsense-technology.html
https://github.com/IntelRealSense/librealsense
https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
  • 15. Trigger signal generator for a multi-camera setup? If only RealSenses are used, one camera can serve as the trigger (hardware sync master). https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_Multiple_Camera_WhitePaper.pdf
  • 17. Towards Environment Independent Device Free Human Activity Recognition
Wenjun Jiang et al. (2018) https://doi.org/10.1145/3241539.3241548
Driven by a wide range of real-world applications, significant efforts have recently been made to explore device-free human activity recognition techniques that utilize the information collected by various wireless infrastructures to infer human activities, without the need for the monitored subject to carry a dedicated device. In this paper, we propose EI, a deep-learning-based device-free activity recognition framework that can remove the environment- and subject-specific information contained in the activity data and extract environment/subject-independent features shared by the data collected on different subjects under different environments. We conduct extensive experiments on four different device-free activity recognition testbeds: WiFi, ultrasound, 60 GHz mmWave, and visible light. The experimental results demonstrate the superior effectiveness and generalizability of the proposed EI framework.
  • 19. Example of an "Ultrasound" Setup (if you want to call 19 kHz ultrasound)
Wenjun Jiang et al. (2018) https://doi.org/10.1145/3241539.3241548
We aim to study the effect of human activities on ultrasound signals and evaluate the performance of the proposed system. To achieve this goal, we employ 12 volunteers (both men and women) as subjects to conduct 6 different activities (wiping the whiteboard, walking, moving a suitcase, rotating the chair, sitting, as well as standing up and sitting down), shown in Fig. 4. The activity data are collected from 6 different rooms in two different buildings. Figure 7 shows the experiment setting in one of the rooms. The transmitter is an iPad with an ultrasound generator app installed, which emits an ultrasound signal of nearly 19 kHz. The receiver is a smartphone, and we use the installed recorder app to collect the sound waves. The sound signal received by the receiver is a mixture of the sound waves traveling along the line-of-sight (LOS) path and those reflected by surrounding objects, including the human bodies in the room.
Ultrasonic Distance Sensor HC-SR04, SEN-15569, $3.95
Single-sensor multispeaker listening with acoustic metamaterials. Yangbo Xie et al., PNAS (2015). https://doi.org/10.1073/pnas.1502276112 Cited by 39
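The 19 kHz probe-tone setup is easy to reason about numerically: generate the tone and confirm where it lands in the receiver's spectrum. A numpy sketch (one second at a common smartphone sample rate; this only localizes the carrier, with no Doppler modeling):

```python
import numpy as np

fs = 48_000             # common smartphone audio sample rate
f0 = 19_000             # near-ultrasonic probe tone, as in the setup above
t = np.arange(fs) / fs  # one second of samples
tone = np.sin(2 * np.pi * f0 * t)

# receiver-side first step: find the carrier bin in the magnitude
# spectrum (activity then shows up as Doppler spread around it)
spec = np.abs(np.fft.rfft(tone))
peak_hz = np.argmax(spec) * fs / len(tone)
print(peak_hz)  # 19000.0
```

With a 48 kHz sample rate the 19 kHz tone sits comfortably below the 24 kHz Nyquist limit, which is why commodity phones can both emit and record it.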
  • 20. A survey on acoustic sensing
Chao Cai, Rong Zheng, Menglan Hu (submitted 11 Jan 2019) https://arxiv.org/abs/1901.03450
In this paper, we present the first survey of recent advances in acoustic sensing using commodity hardware. We propose a general framework that categorizes the main building blocks of acoustic sensing systems. This framework consists of three layers, i.e., the physical layer, processing layer, and application layer. We highlight different sensing approaches in the processing layer and fundamental design considerations in the physical layer. Many existing and potential applications including context-aware applications, human-computer interfaces, and aerial acoustic communications are presented in depth. Challenges and future research trends are also discussed.
  • 21. Hand Gesture Recognition Based on Active Ultrasonic Sensing of Smartphone: A Survey
Zhengjie Wang et al. (August 2019) https://doi.org/10.1109/ACCESS.2019.2933987
This paper investigates the state-of-the-art hand gesture applications and presents a comprehensive survey on the characteristics of studies using the active sonar sensing system. Firstly, we review the existing research of hand gesture recognition based on acoustic signals. After that, we introduce the characteristics of the ultrasonic signal and describe the fundamental principle of hand gesture recognition. Then, we focus on the typical methods used in these studies and present a detailed analysis of signal generation, feature extraction, preprocessing, and recognition methods. Next, we investigate the state-of-the-art ultrasonic-based applications of hand gesture recognition using smartphones and analyze them in detail from dynamic gesture recognition and hand tracking. Afterwards, we make a discussion about these systems from signal acquisition, signal processing, and performance evaluation to obtain some insight into the development of the ultrasonic hand gesture recognition system. Finally, we conclude by discussing the challenges, insights, and open issues involved in hand gesture recognition based on the ultrasonic signal of the smartphone.
Comparison of Systems, Including Adopted Signal, Extracted Signal, Sensors, Devices, Number of Devices, Additional Sensors, and Device-Free; channel impulse response (CIR)
  • 23. Project Soli in depth: How radar-detected gestures could set the Pixel 4 apart
An experimental Google project may finally be ready to make its way into the real world, and the implications could be enormous. https://www.computerworld.com/article/3402019/google-project-soli-pixel-4.html
The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data, and gesture labels and parameters at frame rates from 100 to 10,000 frames per second. The Soli sensor is a fully integrated, low-power radar operating in the 60-GHz ISM band. https://youtu.be/0QNiZfSsPc0
  • 24. Wenjun Jiang et al. (2018) https://doi.org/10.1145/3241539.3241548
individual antenna element phase shift and the relatively small number of antenna elements. For example, the main lobe in the beams generated by our hardware is 30-35 degrees. In Fig. 11, we illustrate the pattern of the beam we used (beam 12) in polar coordinates. Such imperfect beams often result in non-negligible multipath propagation (although still weaker than in WiFi). Thus, using only the physics laws it is very difficult to precisely model the complex ambient environments as well as the unique characteristics of different human subjects. Deep learning is an ideal solution for this problem due to its superior feature extraction ability.
Example of mmWave Setup
Searchlight: Tracking device mobility using indoor luminaries to adapt 60 GHz beams (2018). We present SearchLight, a system that enables adaptive steering of highly directional 60 GHz beams via passive sensing of visible light from existing illumination.
  • 25. MilliBack: Real-Time Plug-n-Play Millimeter Level Tracking Using Wireless Backscattering
Ning Xiao et al. (September 2019) https://doi.org/10.1145/3351270
Real-time handwriting tracking is important for many emerging applications such as Artificial Intelligence assisted education and healthcare. Existing movement tracking systems, including those based on vision, ultrasound or wireless technologies, fail to offer high tracking accuracy, no learning/training/calibration process, low tracking latency, low cost and ease of deployment at the same time.
In this work, we design and evaluate a wireless backscattering based handwriting tracking system, called MilliBack, that satisfies all these requirements. At the heart of MilliBack are two Phase Differential Iterative (PDI) schemes that can infer the position of the backscatter tag (which is attached to a writing tool) from the change in the signal phase. By adopting carefully-designed differential techniques in an iterative manner, we can take the diversity of devices out of the equation. The resulting position calculation has a linear complexity with the number of samples, ensuring fast and accurate tracking.
We have put together a MilliBack prototype and conducted comprehensive experiments. We show that our system can track various handwriting traces accurately; in some tests it achieves a median error of 4.9 mm. We can accurately track and reconstruct arbitrary writing/drawing trajectories such as equations, Chinese characters or just random shapes. We also show that MilliBack can support relatively high writing speed and smoothly adapt to changes of the working environment.
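The geometric core of phase-based tracking such as MilliBack fits in one line: a change in signal phase maps to a change in path length. The paper's PDI schemes work with phase differences across devices to cancel hardware offsets; the single-path model and ~5 mm (60 GHz-band) wavelength below are illustrative assumptions only:

```python
import math

# Hedged sketch of the geometric core of phase-based tracking: a change in
# (unwrapped) signal phase maps to a path-length change of
# (delta_phase / 2*pi) * wavelength. The single-path model and the ~5 mm
# wavelength are illustrative assumptions, not the paper's PDI schemes.
def path_length_change_mm(delta_phase_rad: float, wavelength_mm: float) -> float:
    """Path-length change implied by a small, unwrapped phase change."""
    return delta_phase_rad / (2.0 * math.pi) * wavelength_mm

# One full phase cycle at a ~5 mm wavelength corresponds to ~5 mm of extra
# path, which is why millimetre-level tracking is plausible at these bands.
print(path_length_change_mm(math.pi, 5.0))  # half a cycle → 2.5
```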
  • 26. New Chip for Microwave Imaging of Body
September 30th, 2019 https://www.medgadget.com/2019/09/new-chip-for-microwave-imaging-of-body.html
Today's clinicians are limited to a few imaging modalities, primarily X-ray, CT, MRI, and ultrasound. Microwaves, in principle, can also be used as a useful way to look inside the body. Microwave radiation is non-ionizing, so it should be safer than X-rays, but in practice microwave imagers, because of the electronics inside, have remained bulky tabletop devices. Not only have they been impractical for imaging the body, the electronics inside conventional microwave imagers have suffered from interference.
Now, researchers at the University of Pennsylvania have developed a microwave imaging chip that replaces critical electronic components with optical ones, thereby allowing it to be much smaller and not suffer from as much interference.
The device is manufactured using now-traditional semiconductor techniques, resulting in a chip with over 1,000 photonic components, including waveguides and photodiodes. The device essentially works by converting microwave signals, that bounce back from the target, into optical ones. It then uses optical circuitry to process the data and generate an image of the target. It is only 2 millimeters on a side, so the components are microscopic. Since the chip is about the size of one in your smartphone, it can be integrated into small, potentially hand-held devices to image the heart, spot cancer cells, and even look inside the brain.
Single-chip nanophotonic near-field imager. Farshid Ashtiani, Angelina Risi, and Firooz Aflatouni. Optica Vol. 6, Issue 10, pp. 1255-1260 (2019) https://doi.org/10.1364/OPTICA.6.001255
Here we introduce and demonstrate the first single-chip nanophotonic near-field imager, where the impinging microwave signals are upconverted to the optical domain and optically delayed and processed to form the near-field image of the target object.
The 121-element imager, which is integrated on a silicon chip, is capable of simultaneous processing of ultra-wideband microwave signals and achieves 4.8° spatial resolution for near-field imaging with orders of magnitude smaller size than the benchtop implementations and a fraction of the power consumption.
Fig. 4. Near-field imaging results. (a) Imaging measurement setup. The object is at a distance of about 0.5 m from the receive antenna array. The transmitter radiates UWB pulses toward the object and the reflected signals are received and processed in the nanophotonic imager chip. The dimensions of the target objects and their distance to the imager are chosen to ensure that the entire object is within the imager field-of-view. The transmit antenna, the target object, and the receive antenna array are placed inside a shielded anechoic chamber. (b) Three target objects and their near-field images formed using the implemented nanophotonic imager are shown.
  • 28. Wenjun Jiang et al. (2018) https://doi.org/10.1145/3241539.3241548
Channel State Information. In this section, we make use of the Channel State Information (CSI) to analyze the effect of the human activities on the WiFi signal. CSI refers to known channel properties of a communication link in wireless communications. This information describes how a signal propagates from the transmitter to the receiver and represents the combined effect of, for example, scattering, fading, and power decay with distance.
Modern WiFi devices supporting the IEEE 802.11n/ac 2.4 GHz / 5 GHz standards have multiple transmitting and receiving antennas, and thus can transmit data in MIMO (Multiple-Input Multiple-Output) mode. In an Orthogonal Frequency Division Multiplexing (OFDM) system, the channel between each pair of transmitting and receiving antennas consists of multiple subcarriers. We use the tool in Halperin et al. (2011) Cited by 585 * to report CSI values of 30 OFDM subcarriers. Thus, the dimensionality of H is 30 × Nt × Nr.
The reason why CSI can be used for recognizing human activities is mainly that it is easily affected by the presence of humans and their activities. Specifically, the human body may block the Line-of-Sight (LOS) path and attenuate the signal power. Additionally, the human body can introduce more signal reflections and change the number of propagation paths. Thus, the variance of CSI can reflect human movements in WiFi environments.
* Our toolkit uses the Intel WiFi Link 5300 wireless NIC with 3 antennas. It works on up-to-date Linux operating systems: in our testbed we use Ubuntu 10.04 LTS with the 2.6.36 kernel. https://dhalperi.github.io/linux-80211n-csitool/
Example of WiFi Setup
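The claim that "the variance of CSI can reflect human movements" can be made concrete in a few lines: per-subcarrier amplitude variance over time separates a static channel from a perturbed one. The complex CSI samples below are synthetic placeholders for what a tool like the Linux 802.11n CSI Tool would report:

```python
import cmath
import statistics

# Hedged sketch of why CSI variance reveals activity: movement perturbs the
# complex channel response, so per-subcarrier amplitude variance over time
# separates "empty room" from "person moving". The CSI samples below are
# synthetic placeholders, not output of the actual CSI tool.
def amplitude_variance(csi_over_time):
    """Variance of |CSI| across packets for one subcarrier/antenna pair."""
    return statistics.pvariance([abs(h) for h in csi_over_time])

static = [1.0 + 0.1j] * 50  # stable channel: constant amplitude, no motion
moving = [cmath.rect(1.0 + 0.3 * ((t % 7) / 7.0), 0.2 * t) for t in range(50)]

assert amplitude_variance(static) == 0.0
assert amplitude_variance(moving) > 0.005  # fluctuating amplitude stands out
```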
  • 29. Channel State Information from Pure Communication to Sense and Track Human Motion: A Survey
Mohammed A. A. Al-qaness et al. (2019) https://doi.org/10.3390/s19153329 (This article belongs to the Section Intelligent Sensors)
Recently, wireless signals have been utilized to track human motion and Human Activity Recognition (HAR) in indoor environments. The motion of an object in the test environment causes fluctuations and changes in the Wi-Fi signal reflections at the receiver, which result in variations in received signals. These fluctuations can be used to track object (i.e., a human) motion in indoor environments. This phenomenon can be improved and leveraged in the future to improve the internet of things (IoT) and smart home devices. The main Wi-Fi sensing methods can be broadly categorized as Received Signal Strength Indicator (RSSI), Wi-Fi radar (by using Software Defined Radio (SDR)) and Channel State Information (CSI). CSI and RSSI can be considered as device-free mechanisms because they do not require cumbersome installation, whereas the Wi-Fi radar mechanism requires special devices (i.e., a Universal Software Radio Peripheral (USRP)).
Recent studies demonstrate that CSI outperforms RSSI in sensing accuracy due to its stability and rich information. This paper presents a comprehensive survey of recent advances in the CSI-based sensing mechanism and illustrates the drawbacks, discusses challenges, and presents some suggestions for the future of device-free sensing technology.
Hybrid sensing methods. As already discussed in previous sections, different techniques have different limitations; body sensors attached to the user's body may be used to solve some limitations of current Wi-Fi sensing systems. Therefore, combining body sensors or smartphones with device-free Wi-Fi-based methods into hybrid sensing technologies needs to be addressed in future work. The first simple attempt to combine CSI and wearable devices was presented in Fang et al. 2016, BodyScan. Cited by 44
Moreover, CSI can play an important role in the IoT; therefore, hybrid methods to apply CSI in multimedia communications and IoT applications can be addressed. Furthermore, Wireless Sensor Network (WSN) schemes can be studied.
  • 30. BodyScan: Enabling radio-based sensing on wearable devices for contactless activity and vital sign monitoring
Biyi Fang‡, Nicholas D. Lane†∗, Mi Zhang‡, Aidan Boran†, Fahim Kawsar†. ‡Michigan State University, †Bell Labs, ∗University College London
MobiSys '16: Proceedings of the 14th Annual International Conference on Mobile Systems, Applications, and Services. https://doi.org/10.1145/2906388.2906411 Cited by 44
For these reasons, we expect radio-based sensing to play an important role in the future evolution of wearable devices, and hope the design and techniques of BodyScan can act as a useful foundation for the subsequent investigations.
  • 31. Joint Activity Recognition and Indoor Localization With WiFi Fingerprints
Fei Wang, Jianwei Feng, Yinliang Zhao, Xiaobin Zhang, Shiyuan Zhang, Jinsong Han
IEEE Access (Volume 7, June 2019) https://doi.org/10.1109/ACCESS.2019.2923743 https://github.com/geekfeiw/apl
Recent years have witnessed rapid development in the research topic of WiFi sensing that automatically senses humans with commercial WiFi devices. Past work falls into two major categories, i.e., activity recognition and indoor localization. The key rationale behind WiFi sensing is that people's behaviors can influence the WiFi signal propagation and introduce specific patterns into WiFi signals, called WiFi fingerprints, which can be further explored to identify human activities and locations.
In this paper, we propose a novel deep learning framework for the joint activity recognition and indoor localization task using WiFi channel state information (CSI) fingerprints. More precisely, we develop a system running the standard IEEE 802.11n WiFi protocol and collect more than 1400 CSI fingerprints on 6 activities at 16 indoor locations. Then we propose a dual-task convolutional neural network with one-dimensional convolutional layers for the joint task of activity recognition and indoor localization. The experimental results and ablation study show that our approach achieves good performance in this joint WiFi sensing task.
As shown in FIGURE 1, the first two figures are the top view and front view of the universal software radio peripheral (USRP; Ettus N210, £1,710), respectively. The USRP is mainly composed of a mother board, a daughter board and a WiFi antenna, which is used to broadcast or receive WiFi signals under the control of GNU Radio (https://www.gnuradio.org/). The details are listed below. Meanwhile, the assembling diagram is shown in FIGURE 2. 1. Ettus N210s: hardware with a field programmable gate array (FPGA) in which the IEEE 802.11n protocol can be embedded to send and receive WiFi packages for CSI fingerprints. 2.
Ettus Clock (https://www.ettus.com/all-products/OctoClock-G/, £1,680) and synchronization cables: synchronizing the N210s with a GPS clock to avoid a WiFi phase shift caused by the clock differences between two N210s. 3. Antennas: to broadcast or receive WiFi signals under the control of GNU Radio (https://www.gnuradio.org/). 4. Computers and Ethernet cables: to control the N210s when set in the same local area network as the N210s.
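The one-dimensional convolutional layers in the dual-task network above slide local kernels over CSI time series. A minimal hand-rolled "valid" convolution makes the building block concrete; the hand-picked difference kernel and step signal are illustrative only, the real model learns its kernels and adds two output heads (activity, location):

```python
# Hedged sketch of the building block behind the paper's dual-task model:
# a one-dimensional convolution sliding over a CSI time series. The kernel
# here is a hand-picked difference filter, purely for illustration; the
# real network learns its kernels and has two output heads.
def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation, as in deep-learning use)."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# A step in CSI amplitude (e.g. a person entering the link) lights up under
# a difference kernel: the kind of local feature conv layers learn to detect.
csi_amp = [1.0] * 5 + [2.0] * 5
feature = conv1d_valid(csi_amp, [-1.0, 1.0])
assert feature.count(1.0) == 1 and feature.index(1.0) == 4  # response at the step
```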
  • 32. Evaluating Indoor Localization Performance on an IEEE 802.11ac Explicit-Feedback-Based CSI Learning System
Takeru Fukushima, Tomoki Murakami, Hirantha Abeysekera, Shunsuke Saruwatari, Takashi Watanabe
2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring) https://doi.org/10.1109/VTCSpring.2019.8746628
There is a demand for device-free user location estimation with high accuracy in order to realize various indoor applications. This paper proposes an IEEE 802.11ac explicit feedback-based channel state information (CSI) learning system which can be used for device-free user location estimation. The proposed CSI learning system captures CSI feedback from off-the-shelf Wi-Fi devices and extracts 624 features from a CSI feedback frame defined in IEEE 802.11ac. We evaluated the proposed system using location estimation with six patterns: different combinations of device-free user movement and access point antenna orientation. The evaluation results show that the machine learning based localization achieves approximately 96% accuracy for seven positions of the user, and the divergence of CSI improves localization performance.
The finding is interesting: the divergence of CSI improves machine learning performance. Previous studies such as PhaseFi [14] and PADS [15] have used linear transformation for localization, and described that stable phases improve localization accuracy.
  • 33. Low Human-Effort, Device-Free Localization with Fine-Grained Subcarrier Information
Ju Wang et al. (2018) IEEE Transactions on Mobile Computing (Volume 17, Issue 11, Nov. 1 2018) https://doi.org/10.1109/TMC.2018.2812746
Device-free localization of objects not equipped with RF radios is playing a critical role in many applications. This paper presents LiFS, a Low human-effort, device-free localization system with Fine-grained Subcarrier information, which can localize a target accurately without offline training. The basic idea is simple: channel state information (CSI) is sensitive to a target's location and thus the target can be localized by modelling the CSI measurements of multiple wireless links. However, due to rich multipath indoors, CSI cannot be easily modelled. To deal with this challenge, our key observation is that even in a rich multipath environment, not all subcarriers are affected equally by multipath reflections. Our CSI pre-processing scheme tries to identify the subcarriers not affected by multipath. Thus, CSI on the "clean" subcarriers can still be utilized for accurate localization. Without the need of knowing the majority of transceivers' locations, LiFS achieves a median accuracy of 0.5 m and 1.1 m in line-of-sight (LoS) and non-line-of-sight (NLoS) scenarios, respectively, outperforming the state-of-the-art systems.
We design, implement and evaluate LiFS against the existing Pilot, RASS and RTI systems. Real-world experiments demonstrate that LiFS outperforms the three state-of-the-art systems.
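LiFS's key observation, that only some subcarriers stay "clean" of multipath, can be caricatured with a simple variance threshold. The threshold and the synthetic amplitude measurements below are illustrative assumptions, not the paper's actual selection criterion:

```python
import statistics

# Hedged caricature of LiFS's subcarrier selection: keep only subcarriers
# whose amplitude stays stable across measurements, on the assumption that
# strong fluctuation signals multipath contamination. The threshold and
# data are illustrative, not the paper's actual criterion.
def clean_subcarriers(amps_by_subcarrier, max_variance):
    """Return indices of subcarriers whose amplitude variance is small."""
    return [i for i, amps in enumerate(amps_by_subcarrier)
            if statistics.pvariance(amps) <= max_variance]

measurements = [
    [1.00, 1.01, 0.99, 1.00],  # stable -> likely clean
    [1.00, 1.60, 0.40, 1.20],  # fluctuating -> multipath-affected
    [0.50, 0.51, 0.50, 0.49],  # stable -> likely clean
]
print(clean_subcarriers(measurements, max_variance=0.01))  # → [0, 2]
```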
  • 34. WiFi Sensing with Channel State Information: A Survey
Yongsen Ma, Gang Zhou, Shuangquan Wang. ACM Computing Surveys (CSUR), Volume 52, Issue 3, July 2019 https://doi.org/10.1145/3310194
Different WiFi sensing algorithms and signal processing techniques have their own advantages and limitations and are suitable for different WiFi sensing applications. The survey groups CSI-based WiFi sensing applications into three categories, detection, recognition, and estimation, depending on whether the outputs are binary/multi-class classifications or numerical values. With the development and deployment of new WiFi technologies, there will be more WiFi sensing opportunities wherein the targets may go beyond humans to environments, animals, and objects.
The survey highlights three challenges for WiFi sensing: robustness and generalization, privacy and security, and coexistence of WiFi sensing and networking. Finally, the survey presents three future WiFi sensing trends, i.e., integrating cross-layer network information, multi-device cooperation, and fusion of different sensors, for enhancing existing WiFi sensing capabilities and enabling new WiFi sensing opportunities.
[1] Heba Abdelnasser, Khaled A. Harras, and Moustafa Youssef. 2015. UbiBreathe: A ubiquitous non-invasive WiFi-based breathing estimator. In Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc'15). 277–286. https://doi.org/10.1145/2746285.2755969
[56] Jian Liu, Yan Wang, Yingying Chen, Jie Yang, Xu Chen, and Jerry Cheng. 2015. Tracking vital signs during sleep leveraging off-the-shelf WiFi. In Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc'15). 267–276. https://doi.org/10.1145/2746285.2746303
[100] Xuyu Wang, Chao Yang, and Shiwen Mao. 2017. PhaseBeat: Exploiting CSI phase data for vital sign monitoring with commodity WiFi devices.
In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS'17). 1230–1239. https://doi.org/10.1109/ICDCS.2017.206
[120] Fu Xiao, Jing Chen, Xiao Hui Xie, Linqing Gui, Juan Li Sun, and Wang Ruchuan. 2018. SEARE: A system for exercise activity recognition and quality evaluation based on green sensing. IEEE Trans. Emerg. Top. Comput. (Early Access) (2018). https://doi.org/10.1109/TETC.2018.2790080
New WiFi sensing algorithms are also required to take full advantage of multi-domain information with time, spatial, and user dependence. New coordination algorithms are necessary for extracting useful information from different domains. Since CSI has some unique properties, such as low spatial resolution and sensitivity to environmental changes, it is crucial for WiFi sensing algorithms to be robust in different scenarios. Most existing deep learning solutions for WiFi sensing reuse DNNs designed for images and videos. It is necessary to find suitable DNN types and develop new DNNs specifically designed for CSI data. For cross-sensor WiFi sensing, pre-trained DNNs for other sensors can be used for automatic labeling of CSI data. Transfer learning, teacher–student network training, and reinforcement learning can also be used to reduce network training efforts. WiFi sensing is easy to use for malicious purposes, since WiFi signals pass through walls and are not limited by lighting conditions. Generative Adversarial Networks (GANs) can be used to generate fake WiFi signal patterns to protect against malicious WiFi sensing.
  • 35. WiFi sensing for other vital signals, or are these signals artifacts with respect to the actual movement analysis?
  • 40. Some commercial options exist, no need to build anything. They do not really measure sleep well, but maybe OK for respiration + RR intervals? https://doi.org/10.1007/978-3-319-78759-6_32
Novel. http://www.novel.de/novelcontent/pliance/wheelchair
Tekscan. https://www.tekscan.com/
Pressure Profile. https://pressureprofile.com/
Texisense. http://www.texisense.com/capteur_en
Sensor Products. http://www.sensorprod.com/dynamic/mattress.php
Emfit. http://www.safetysystemsdistribution.co.uk/bed-exit-alarms/
Emfit. https://www.emfitqs.com/
Murata. https://www.murata.com/
EarlySense. https://www.earlysense.com/
  • 41. For an analogy, see the PLR (Pupillary Light Reflex): you see heart rate and respiration in pupil size. https://www.slideshare.net/PetteriTeikariPhD/hyperspectral-retinal-imaging
  • 42. If you wanted to see nonlinear dynamics, you may want to try to separate the ANS from other pathological processes?
Nonlinear analysis of pupillary dynamics. https://www.researchgate.net/publication/280919208_Nonlinear_analysis_of_pupillary_dynamics accessed Jul 13 2018
"Our results suggest that the pupillary dynamics are modulated at different time scales by processes and/or systems in a similar way as the cardiovascular system, confirming the hypothesis of a similar autonomic modulation for different physiological systems [Calcagnini et al. 2000]. These results are also consistent with Yoshida et al. 1995, who reported for Spontaneous Fluctuations of Pupil Dilation (SFPD) an f^-1 spectral characteristic at frequencies f < 0.2 Hz."
Correlations between the autonomic modulation of heart rate, blood pressure and the pupillary light reflex in healthy subjects. Bär et al. (2009) https://doi.org/10.1016/j.jns.2009.01.010
The interaction between pupil function and cardiovascular regulation in patients with acute schizophrenia. Bär et al. (2008) https://doi.org/10.1016/j.clinph.2008.06.012
Infrared Camera-Based Non-contact Measurement of Brain Activity From Pupillary Rhythms. Park and Whang (2018) https://doi.org/10.3389/fphys.2018.01400
Respiratory fluctuations in pupil size. P Borgdorff (1975) https://www.ncbi.nlm.nih.gov/pubmed/1130509 Cited by 75
Preliminary investigation of pupil size variability: toward non-contact assessment of cardiovascular variability. Hung and Zhang (2006) https://doi.org/10.1109/ISSMDBS.2006.360118
Recent research has discovered the presence of heart rate variability (HRV) and blood pressure variability (BPV) frequency components in pupil size variability (PSV). The aims of this study are to investigate the effect of physical exercise on the PSV spectrum, and to address the feasibility of PSV monitoring. Electrocardiogram, respiration effort, finger arterial pressure, and pupil images were recorded from ten subjects before and after exercise.
Normalized high-frequency (0.15-0.5 Hz) power of PSV in seven subjects and total (0.04-0.5 Hz) power of PSV in nine subjects decreased after exercise, followed by an increase during recovery. The patterns of changes are generally paralleled by corresponding indexes from the spectra of HRV and BPV. Preliminary results suggest that PSV has the potential to be a novel physiological indicator used in patient monitoring.
Non-contact measurement of heart response reflected in human eye. Park et al. (2018) https://doi.org/10.1016/j.ijpsycho.2017.07.014
Does a change in iris diameter indicate heart rate variability? Koc et al. (2018) https://doi.org/10.1016/j.jns.2009.01.010
PupilScreen: Using Smartphones to Assess Traumatic Brain Injury. https://atm15.github.io/publications/pupilscreen/
  • 43. The nonlinear analysis of PLR, well, it has always been sort of a niche.
Spontaneous oscillations in a nonlinear delayed-feedback shunting model of the pupil light reflex. P. C. Bressloff and C. V. Wood, Phys. Rev. E 58, 3597. Published 1 September 1998. https://doi.org/10.1103/PhysRevE.58.3597
Hence, the experimentally motivated second-order delay equation presented in this paper accounts for certain discrepancies of the previous first-order model both in open-loop and closed-loop configurations. Furthermore, it allows one to investigate the dependence of pupil light reflex dynamics on various neurophysiologically important parameters of the system such as the effective strength of neural connections and the time delay. These parameters vary from patient to patient and extreme values can be an indicator of a pathology.
Future work will investigate the effects of noise arising from the neural components of the reflex arc, as well as details concerning photoreceptor dynamics. In particular, the important role that photoreceptors play in light adaptation will be investigated and contrasted with possible neural mechanisms for adaptation.
In conclusion, the pupil light reflex is an important paradigm for nonlinear feedback control systems. Understanding the behavior of such systems involves important mathematical questions concerning the properties of differential equations with delays and noise, and could benefit the clinician interested in developing diagnostic tests for detecting neurological disorders. It is also hoped that the work will have applications in other areas such as respiratory and cardiac control.
Pupil unrest: an example of noise in a biological servomechanism. L Stark, FW Campbell, J Atwood. Nature, 1958. Cited by 130
Noise and critical behavior of the pupil light reflex at oscillation onset. A Longtin, JG Milton, JE Bos, MC Mackey. Physical Review A, 1990. Cited by 178
Spontaneous oscillations in a nonlinear delayed-feedback shunting model of the pupil light reflex. PC Bressloff, CV Wood. Physical Review E, 1998. Cited by 20
Nonlinear dynamics in physiology and medicine. A Beuter, L Glass, MC Mackey, MS Titcombe. 2003. Cited by 145
How do spontaneous pupillary oscillations in light relate to light intensity? M Warga, H Lüdtke, H Wilhelm, B Wilhelm. Vision Research, 2009. Cited by 33
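The flavor of these delayed-feedback pupil models can be reproduced in a few lines: a first-order Mackey-Glass-style delay equation, dx/dt = -a·x(t) + c/(1 + x(t-τ)^n), integrated with forward Euler. The parameters are illustrative, chosen so the delayed feedback destabilizes the fixed point and sustained oscillations (cf. pupil hippus) appear; they are not fitted to the Bressloff-Wood shunting model:

```python
# Hedged sketch of a delayed-feedback oscillator: a first-order
# Mackey-Glass-style delay equation
#     dx/dt = -a * x(t) + c / (1 + x(t - tau) ** n),
# integrated with forward Euler. Parameters are illustrative, picked so the
# delayed negative feedback destabilizes the fixed point and sustained
# oscillations appear; they are not fitted to the Bressloff-Wood model.
def simulate(a=1.0, c=4.0, n=10, tau=2.0, dt=0.01, steps=6000):
    delay = int(tau / dt)
    x = [0.5] * (delay + 1)  # constant history on [-tau, 0]
    for _ in range(steps):
        feedback = c / (1.0 + x[-1 - delay] ** n)
        x.append(x[-1] + dt * (-a * x[-1] + feedback))
    return x

trace = simulate()
tail = trace[-2000:]  # discard the initial transient
amplitude = max(tail) - min(tail)
print(amplitude > 0.1)  # oscillation persists rather than decaying to a fixed point
```

Without the delay (tau near zero) the same equation settles to its fixed point, which is the qualitative story these papers tell about oscillation onset.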
  • 45. EEG Example
Artifacts and noise removal for electroencephalogram (EEG): A literature review. Chi Qin Lai et al. (2018) https://doi.org/10.1109/ISCAIE.2018.8405493
Raw EEG may be contaminated with unwanted components such as noises and artifacts caused by the power source, environment, eye blinks, heart rate and muscle movements, which are unavoidable. These unwanted components will affect the analysis of EEG and provide inaccurate information. Therefore, researchers have proposed all kinds of approaches to eliminate unwanted noises and artifacts from EEG. In this paper, a literature review is carried out to study the works that have been done for noise and artifact removal from the year 2010 up to the present. It is found that conventional approaches include ICA, wavelet-based analysis, statistical analysis and others. However, the existing ways of artifact removal cannot eliminate certain noise and will cause information loss by directly discarding the contaminated components. From the study, it is shown that combining conventional with other methods is popular, as it is able to improve the removal of artifacts. The current trend of artifact removal makes use of machine learning to provide an automated solution with higher efficiency.
Use independent component analysis (ICA) to remove ECG artifacts. http://www.fieldtriptoolbox.org/example/use_independent_component_analysis_ica_to_remove_ecg_artifacts/
EEG artifact removal with Blind Source Separation. https://sccn.ucsd.edu/~jung/Site/EEG_artifact_removal.html
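As a simpler stand-in for the ICA/BSS cleaning linked above, reference-channel regression already conveys the idea: estimate how strongly an ECG reference leaks into an EEG channel (a least-squares scale factor) and subtract that projection. The sinusoidal "EEG"/"ECG" signals are synthetic and illustrative only; real pipelines use ICA or wavelet methods to handle multi-channel mixing:

```python
import math

# Hedged sketch: reference-channel regression as a simpler stand-in for the
# ICA/BSS artifact removal linked above. Estimate the least-squares leakage
# of an ECG reference into an EEG channel and subtract that projection.
# The sinusoidal "EEG"/"ECG" signals are synthetic and illustrative only.
def remove_reference(eeg, ref):
    """Subtract the least-squares projection of ref from eeg."""
    beta = sum(e * r for e, r in zip(eeg, ref)) / sum(r * r for r in ref)
    return [e - beta * r for e, r in zip(eeg, ref)]

t = [i * 0.01 for i in range(500)]
brain = [math.sin(2 * math.pi * 10.0 * ti) for ti in t]  # 10 Hz "EEG" rhythm
ecg = [math.sin(2 * math.pi * 1.2 * ti) for ti in t]     # 1.2 Hz "ECG" artifact
contaminated = [b + 0.8 * r for b, r in zip(brain, ecg)]

cleaned = remove_reference(contaminated, ecg)
residual = max(abs(c - b) for c, b in zip(cleaned, brain))
print(residual < 0.05)  # leakage is linear here, so it is almost fully removed
```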
  • 46. Simultaneous EEG-fMRI Example
Ballistocardiogram Artifact Reduction in Simultaneous EEG-fMRI using Deep Learning. James R. McIntosh et al. (15 Oct 2019)
The concurrent recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is a technique that has received much attention due to its potential for combined high temporal and spatial resolution. However, the ballistocardiogram (BCG), a large-amplitude artifact caused by cardiac-induced movement, contaminates the EEG during EEG-fMRI recordings. In this paper, we present a novel method for BCG artifact suppression using recurrent neural networks (RNNs). One general difficulty in assessing BCG suppression quality is that the ground truth BCG signal is unknown. Unlike other current methods, the core of our method is designed to directly generate BCG from ECG. It is therefore possible to see how BCGNet might be used to simulate the ground truth BCG signal, which can be augmented with simulated 1/f noise and brain-derived sources in order to study the relative efficacy of other BCG suppression techniques. Furthermore, via extensions of BCGNet that either model the propagation of the ECG signal itself, or directly inject signal into the small central dense layer of the network, it may be possible to gain fine-grained control of the BCG construction under test, for example, to study the different methods under changing heart-rate conditions.
  • 48. ‘Cocktail Party Problem’ here and everywhere
Variational Autoencoders and Nonlinear ICA: A Unifying Framework. Ilyes Khemakhem, Gatsby Computational Neuroscience Unit, UCL; Diederik P. Kingma, Google Brain; Aapo Hyvärinen, INRIA-Saclay, Dept of CS, University of Helsinki (10 Jul 2019) https://arxiv.org/abs/1907.04809
The advantage of the new framework over typical deep latent-variable models used with VAEs is that we actually recover the original latents, thus providing principled "disentanglement". On the other hand, the advantages of this algorithm for solving nonlinear ICA are several; briefly, we obtain the likelihood and can use MLE, we learn a forward model as well and can generate new data, and we consider the more general cases of noisy data with fewer components, and even discrete data.
Independent component analysis: algorithms and applications. A Hyvärinen, E Oja. Neural Networks 13 (4-5), 411-430, 2000. Cited by 18,114
Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA. Aapo Hyvärinen, Hiroshi Morioka. Published in NIPS 2016. https://arxiv.org/abs/1605.06336 Cited by 45
  • 52. Pyroelectric IR (PIR) Sensing. Same tech as low-cost occupancy sensing for lighting, for example (PIR Motion Sensor)
Enabling Cognitive Pyroelectric Infrared Sensing: From Reconfigurable Signal Conditioning to Sensor Mask Design. Rui Ma, Jiaqi Gong, Guocheng Liu, Qi Hao. IEEE Transactions on Industrial Informatics (30 Sept 2019) https://doi.org/10.1109/TII.2019.2944700
Poor signal-to-noise ratios (SNRs) and low spatial resolutions have impeded low-cost pyroelectric infrared (PIR) sensors from many intelligent applications for thermal target detection/recognition. This paper presents a Cognitive Signal Conditioning & Modulation Learning (CSCML) framework for PIR sensing with two innovations to solve these problems: 1) a reconfigurable signal conditioning circuit design to achieve high SNRs; 2) an optimal sensor mask design to achieve high recognition performance. By using a Programmable System on Chip (PSoC), the PIR signal amplifier gain and filter bandwidth can be adjusted automatically according to working conditions. Based on the modeling between PIR physics and thermal images, sensor masks can be optimized through training Convolutional Neural Networks (CNNs) with large thermal image datasets for feature extraction of specific thermal targets. The experimental results verify the improved performance of PIR sensors in various working conditions and applications by using the developed reconfigurable circuit and application-specific masks.
  • 53. Ground Truth? From Vicon motion capture, or a laser scanner. You want to know if your measurements and model output are actually accurate and useful.
  • 54. Idea for dataset creation pipeline requirements. Wearables, Biomechanical Feedback, and Human Motor-Skills' Learning & Optimization. Xiang Zhang, Gongbing Shan, Ye Wang, Bingjun Wan and Hua Li. Appl. Sci. 2019, 9(2), 226; https://doi.org/10.3390/app9020226 It is well known that, among all human physical activities, sports and arts skills exhibit the most diversity of motor control. The datasets that are available for developing deep learning models have to reflect the diversity, because the depth and specialization must come from training the deep learning algorithms with the massive and diverse data collected from sports and arts motor skills. Therefore, at present, the vital step for developing a real-time biomechanical feedback tool is to simultaneously collect a large amount of motion data using both 3D motion capture (e.g., the two-chain model with ~40 markers) and wearable IMUs (e.g., the same model with six IMUs). The datasets should cover a large variety of sports skills and arts performances. As such, the 3D motion-capture data can serve as a "supervisor" for training a network model to map IMU data to joints' kinematic data. Such a deep learning model could be universally applied in motor learning and the training of sports and arts skills. Machine and deep learning for sport-specific movement recognition: a systematic review of model development and performance. Anargyros William McNally, Alexander Wong, John McPhee. https://doi.org/10.1080/02640414.2018.1521769
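The "mocap as supervisor" idea reduces to supervised regression from IMU features to joint kinematics. A minimal ridge-regression sketch of that mapping (a linear stand-in for the deep network the authors propose; all names and synthetic data are illustrative):

```python
import numpy as np

def fit_imu_to_joints(imu_feats, joint_angles, lam=1e-3):
    """Ridge regression: IMU feature windows -> mocap joint angles.
    The mocap angles act as the 'supervisor' labels."""
    X = np.hstack([imu_feats, np.ones((len(imu_feats), 1))])   # add bias
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                           X.T @ joint_angles)

def predict_joints(W, imu_feats):
    X = np.hstack([imu_feats, np.ones((len(imu_feats), 1))])
    return X @ W

# Synthetic data: 6 IMU channels, 2 joint angles linearly related + noise
rng = np.random.default_rng(0)
imu = rng.normal(size=(500, 6))
true_W = rng.normal(size=(6, 2))
angles = imu @ true_W + 0.01 * rng.normal(size=(500, 2))
W = fit_imu_to_joints(imu[:400], angles[:400])                 # train split
rmse = np.sqrt(np.mean((predict_joints(W, imu[400:]) - angles[400:]) ** 2))
```

A real pipeline would replace the linear map with a temporal network and window the IMU streams, but the supervision structure is the same.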
  • 55. Multimodal Measurement Rig for action recognition. They recorded human activities with a 360° RGB camera, Lidar and RGB-D at the same time. Nice rig; otherwise maybe not so nice. "This multimodal dataset is the first of its kind to be made openly available and can be exploited for many applications that require HAR, including sports analytics, healthcare assistance and indoor intelligent mobility." https://arxiv.org/pdf/1901.02858.pdf A multimodal room for teasing out what modalities matter?
  • 56. Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning. Sensors 2019, 19(7), 1716; SW & Contents Basic Technology Research Group, Electronics and Telecommunications Research Institute, Daejeon. https://doi.org/10.3390/s19071716 We adopt a two-level ensemble model to combine class probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve the classification performance. By analyzing the accuracy of each sensor on different types of activity, we elaborate custom weights for multimodal sensor fusion that reflect the characteristics of individual activities. The accuracy of WiFi might be enough for some tasks, while for others you need an Intel RealSense (or some bed sensor). Deep Learning for Musculoskeletal Force Prediction. https://doi.org/10.1007/s10439-018-02190-0 Department of Bioengineering, Imperial College London, London UK. "The dataset comprised synchronously captured kinematic (lower limb marker trajectories obtained by optoelectronic capture: Vicon MX system, Vicon Motion Systems Ltd, Oxford, UK), force plate (ground reaction force and centre of pressure: Kistler Instrumente AG, Winterthur, Switzerland) and EMG (Trigno Wireless EMG system, Delsys, USA) data from 156 subjects during multiple trials of level walking." Soft robot perception using embedded soft sensors and recurrent neural networks. Thomas George Thuruthel, Benjamin Shih, Cecilia Laschi and Michael Thomas Tolley. Science Robotics 30 Jan 2019: Vol. 4, Issue 26, eaav1488. DOI: 10.1126/scirobotics.aav1488
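Classifier-level fusion with activity-specific weights amounts to a weighted combination of per-sensor class probabilities; a minimal sketch (the weight and probability values are made-up illustrations, not the paper's learned values):

```python
import numpy as np

def fuse(probs, weights):
    """probs: (n_sensors, n_classes) softmax outputs, one row per sensor;
    weights: (n_sensors, n_classes) per-sensor, per-class fusion weights
    reflecting how well each modality sees each activity."""
    fused = (weights * probs).sum(axis=0)
    return fused / fused.sum()          # renormalize to a distribution

# Example: WiFi CSI is trusted more for coarse 'walking' (class 0),
# RGB-D more for fine 'gesture' (class 1).
probs = np.array([[0.7, 0.3],    # WiFi CSI classifier
                  [0.4, 0.6]])   # RGB-D classifier
weights = np.array([[0.8, 0.2],
                    [0.5, 0.9]])
p = fuse(probs, weights)
```

A second-level (stacked) classifier could replace the fixed weights, which is the "two-level ensemble" variant the paper describes.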
  • 57. Low-cost OptiTrack vs. high-end Vicon. https://github.com/motionlab-mogi-bme/Application-of-OptiTrack-motion-capture-systems-in-human-movement-analysis A novel validation and calibration method for motion capture systems based on micro-triangulation. Gergely Nagymáté, Tamás Tuchband, Rita M. Kiss. Motion Analysis Laboratory of the Department of Mechatronics, Optics and Mechanical Engineering Informatics at the Budapest University of Technology and Economics in Hungary. https://doi.org/10.1016/j.jbiomech.2018.04.009 Our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). A simply feasible but less accurate absolute accuracy compensation method using a tape measure on large distances was also tested, which resulted in similar scaling compensation compared to the surveying method or direct wand size compensation by a high-precision 3D laser scanner [Leica TS15i 1" total stations (angular accuracy: 1"); ATOS II Triple Scan MV320].
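The absolute-accuracy metric used above is a plain 3D RMSE between camera-measured and surveyed marker coordinates; sketched below (variable names are illustrative):

```python
import numpy as np

def marker_rmse(measured, reference):
    """RMSE over 3D marker positions (both arrays: n_markers x 3, in mm)."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Toy check: a uniform 3-4-0 mm offset on every marker gives a 5 mm RMSE
ref = np.zeros((10, 3))
meas = ref + np.array([3.0, 4.0, 0.0])
err = marker_rmse(meas, ref)   # -> 5.0
```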
  • 58. Low-cost OptiTrack vs. high-end Vicon. https://doi.org/10.1016/j.jbiomech.2018.04.009 (2018): "The use of low cost optical motion capture (OMC) multi-camera systems is spreading in the fields of biomechanics research (Hicheur et al., 2016) and rehabilitation (Chung et al., 2016)." Summary of accuracy evaluation studies: Different OMC systems are sometimes validated using Vicon camera systems (Vicon Motion Systems Ltd, Oxford, UK), which are regarded as the gold standard in scientific applications (Ehara et al., 1997, 1995). https://doi.org/10.1016/0966-6362(95)99067-U https://doi.org/10.1016/S0966-6362(96)01093-4 The accuracy and processing time of 11 commercially available 3D camera systems were tested to evaluate their performance in clinical gait evaluation. The systems tested were Ariel APAS, Dynas 3D/h, Elite Plus, ExpertVision, PEAK5, PRIMAS, Quick MAG, VICON 140, VICON 370, color Video Locus and reflective Video Locus. Accuracy and processing time of commercially available 3D camera systems for clinical gait measurement were measured. Tested systems were: Quick MAG, Video Locus, Peak 5, Ariel, Vicon 370, Elite, Kinemetrix 3D, and Optotrack 3020.
  • 59. Affordable optical motion capture vs. Vicon "ground truth". Affordable clinical gait analysis: An assessment of the marker tracking accuracy of a new low-cost optical 3D motion analysis system. Bruce Carse, Barry Meadows, Roy Bowers, Philip Rowe (2013). https://doi.org/10.1016/j.physio.2013.03.001 Cited by 88 - Related articles. A rigid cluster of four reflective markers was used to compare a low-cost OptiTrack 3D motion analysis system against two more expensive systems (Vicon 612 and Vicon MX). Accuracy was measured by comparing the mean vector magnitudes (between each combination of markers) for each system. There are a number of shortcomings of optical 3D motion analysis systems: cost of equipment, time required and expertise to interpret results. While it does not address all of these problems, the OptiTrack system provides a low-cost solution that can accurately track marker trajectories to a level comparable with an older and widely used higher-cost system (Vicon 612). While it cannot be considered to be a complete clinical motion analysis solution, it does represent a positive step towards making 3DGA more accessible to wider research and clinical audiences. Next-Generation Low-Cost Motion Capture Systems Can Provide Comparable Spatial Accuracy to High-End Systems. Dominic Thewlis, Chris Bishop, Nathan Daniell, Gunther Paul (2013). https://doi.org/10.1123/jab.29.1.112 Cited by 49 - Related articles. We assessed static linear accuracy, dynamic linear accuracy and compared gait kinematics from a Vicon MX-f20 system to a Natural Point OptiTrack system. In all experiments data were sampled simultaneously. We identified both systems perform excellently in linear accuracy tests with absolute errors not exceeding 1%. In gait data there was again strong agreement between the two systems in sagittal and coronal plane kinematics. Transverse plane kinematics differed by up to 3° at the knee and hip, which we attributed to the impact of soft tissue artifact accelerations on the data.
We suggest that low-cost systems are comparably accurate to their high-end competitors and offer a platform with accuracy acceptable in research for laboratories with a limited budget. Further work is required to explore the absolute angular accuracy of the systems and their susceptibility to high accelerations associated with soft tissue artifact; however, it is likely that differences of this magnitude might be evident between competing high-end solutions. We must also begin to explore analog integration or synchronization with low-cost systems, as inaccuracies here could impact significantly when calculating joint moments and powers using inverse dynamics.
  • 60. IMUs vs. goniometer ground truth. Predictive trajectory estimation during rehabilitative tasks in augmented reality using inertial sensors. Christopher L. Hunt; Avinash Sharma; Luke E. Osborn; Rahul R. Kaliki; Nitish V. Thakor. Department of Biomedical Engineering, Johns Hopkins University / Infinite Biomedical Technologies. 2018 IEEE Biomedical Circuits and Systems Conference (BioCAS). https://doi.org/10.1109/BIOCAS.2018.8584805 This paper presents a wireless kinematic tracking framework used for biomechanical analysis during rehabilitative tasks in augmented and virtual reality. The framework uses low-cost inertial measurement units and exploits the rigid connections of the human skeletal system to provide egocentric position estimates of joints to centimeter accuracy. On-board sensor fusion combines information from three-axis accelerometers, gyroscopes, and magnetometers to provide robust estimates in real-time. Sensor precision and accuracy were validated using the root mean square error of estimated joint angles against ground-truth goniometer measurements (high-precision stepper motor with a 0.9° step size; NEMA, Rosslyn, VA). The sensor network produced a mean estimate accuracy of 2.81° with 1.06° precision, resulting in a maximum hand tracking error of 7.06 cm. As an application, the network is used to collect kinematic information from an unconstrained object manipulation task in augmented reality, from which dynamic movement primitives are extracted to characterize natural task completion in N = 3 able-bodied human subjects. These primitives are then leveraged for trajectory estimation in both a generalized and a subject-specific scheme, resulting in 0.187 cm and 0.161 cm regression accuracy, respectively. Our proposed kinematic tracking network is wireless, accurate, and especially useful for predicting voluntary actuation in virtual and augmented reality applications. An overview of a rehabilitation session: (A) The individual uses an augmented reality headset to receive kinematic tasks to complete.
Tasks consist of transporting an object to and from different quadrants while possibly changing its orientation. Sensorized tracking nodes {nRF51822 microcontroller (Nordic Semiconductor via RedBearLab) with MPU9250 9-axis IMU and a Mahony complementary filter [protocol: Nordic Enhanced ShockBurst]} are rigidly affixed to the anatomical landmarks and are used to record multijoint trajectories for primitive construction. (B) Once computed, these primitives are used to predict natural, user-specific hand trajectories in subsequent tasks. These predicted trajectories can then be rendered by the headset to serve as an optimal reference for the user.
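The on-board Mahony-style fusion can be illustrated with its simplest relative, a one-axis complementary filter that blends integrated gyro rate with the accelerometer tilt angle (a simplified sketch under synthetic static-pose data, not the BioCAS implementation):

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend fast-but-drifting gyro integration with noisy-but-unbiased
    accelerometer tilt; alpha sets the crossover between the two."""
    angle = accel_angle[0]
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

# Static pose at 10 deg: gyro sees only a 0.5 deg/s bias, accel is noisy.
rng = np.random.default_rng(0)
n, dt = 2000, 0.01
gyro = np.full(n, 0.5)                       # pure bias, no real motion
accel = 10.0 + rng.normal(0.0, 0.5, n)       # noisy tilt estimate (deg)
est = complementary_filter(gyro, accel, dt)
```

Gyro-only integration would drift by 10° over these 20 s; the accelerometer term bounds the error to a fraction of a degree while keeping the gyro's low noise.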
  • 61. Gold-standard benchmarking: IMU vs. optical capture. A sensor-to-segment calibration method for motion capture system based on low cost MIMU. Namchol Choe, Hongyu Zhao, Sen Qiu, Yongguk So. Measurement Volume 131, January 2019, Pages 490-500. https://doi.org/10.1016/j.measurement.2018.07.078 A sensor-to-segment calibration method for motion capture systems is proposed. Calibration principle, procedure and program are listed. Positions of the magnetometer correction are determined. Influence of the magnetic and inertial measurement unit (MIMU) mounting position is evaluated. Effectiveness of the proposed method is validated by an optical device (NDI Polaris Spectra System). Coordinate systems in body and vectors of body segments: (a) Body local coordinate system (BLCS) and body segment coordinate system (BSCS), (b) Vectors of body segments. A sensor fusion approach for inertial sensors based 3D kinematics and pathological gait assessments: toward an adaptive control of stimulation in post-stroke subjects. B. Sijobert; F. Feuvrier; J. Froger; D. Guiraud; C. Azevedo Coste. https://doi.org/10.1109/EMBC.2018.8512985 (2018) Pathological gait assessment and assistive control based on functional electrical stimulation (FES) in post-stroke individuals bring out a common need to robustly quantify kinematics facing multiple constraints. This study proposes a novel approach using inertial sensors to compute dorsiflexion angles and spatio-temporal parameters, in order to be later used as inputs for online closed-loop control of FES. 26 post-stroke subjects were asked to walk on a pressure mat equipped with inertial measurement units (IMU) and passive reflective markers. A total of 930 strides were individually analyzed and results between IMU-based algorithms and reference systems compared. Mean absolute (MA) errors of dorsiflexion angles were found to be less than 4°, while stride lengths were robustly segmented and estimated with a MA error of less than 10 cm.
These results open new doors to rehabilitation using adaptive FES closed-loop control strategies in "foot drop" syndrome correction.
  • 62. Soft-tissue artifact (STA): the human body is too soft as a metrological platform if you start throwing IMUs on the body. Quantification of soft tissue artifact in lower limb human motion analysis: A systematic review. Alana Peters, Brook Galna, Morgan Sangeux, Meg Morris, Richard Baker. Gait & Posture Volume 31, Issue 1, January 2010, Pages 1-8. https://doi.org/10.1016/j.gaitpost.2009.09.004 Cited by 221 - Related articles. Conflict of interest: A/Prof Richard Baker and Dr Morgan Sangeux receive research funding from Vicon (Oxford, UK). A Simple Algorithm for Assimilating Marker-Based Motion Capture Data During Periodic Human Movement Into Models of Multi-Rigid-Body Systems. Yasuyuki Suzuki, Takuya Inoue, and Taishin Nomura. Front Bioeng Biotechnol. 2018; 6: 141. Published online 2018 Oct 18. doi: 10.3389/fbioe.2018.00141 Here we propose a simple algorithm for assimilating motion capture data during periodic human movements, such as bipedal walking, into models of multi-rigid-body systems in a way that the assimilated motions are not affected by STA. The proposed algorithm assumes that STA time-profiles during periodic movements are also periodic. We then express unknown STA profiles using Fourier series, and show that the Fourier coefficients can be determined optimally based solely on the periodicity assumption for the STA and kinematic constraints requiring that any two adjacent rigid links are connected by a rotary joint, leading to the STA-free assimilated motion that is consistent with the multi-rigid-link model. Rigid seven-link model of human walking: (A) Positions of landmarks and rigid seven-link model of human body. The rigid seven-link model consists of a Head-Arm-Trunk link (HAT), left and right Thigh links (l/r-T), left and right Shank links (l/r-S), and left and right Foot links (l/r-F). Blue circles represent landmarks of each link, and each landmark corresponds to an anatomical landmark of the human body.
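The periodicity assumption of Suzuki et al. boils down to representing the artifact as a truncated Fourier series and solving for the coefficients by least squares; a minimal sketch on a synthetic artifact (the harmonic count, period, and test signal are illustrative, and the paper's joint constraints are omitted):

```python
import numpy as np

def fit_periodic_sta(t, artifact, period, n_harmonics=3):
    """Least-squares fit of a truncated Fourier series to a periodic
    soft-tissue-artifact signal; returns coefficients and reconstruction."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]                 # DC term
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    B = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(B, artifact, rcond=None)
    return coef, B @ coef

# Synthetic gait-periodic artifact (period 1.1 s), inside the model class:
t = np.linspace(0.0, 4.4, 800)
w = 2.0 * np.pi / 1.1
sta = 2.0 + 1.5 * np.cos(w * t) + 0.5 * np.sin(2 * w * t)
coef, recon = fit_periodic_sta(t, sta, period=1.1)
```

In the full algorithm these Fourier coefficients are estimated jointly with the rigid-link kinematic constraints, not from the artifact directly.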
  • 63. Joint kinematics estimation using a multi-body kinematics optimisation and an extended Kalman filter, and embedding a soft tissue artefact model. Vincent Bonnet et al. - Cited by 7 - Related articles. Journal of Biomechanics Volume 62, 6 September 2017, Pages 148-158. https://doi.org/10.1016/j.jbiomech.2017.04.033 To reduce the impact of the soft tissue artefact (STA) on the estimate of skeletal movement using stereophotogrammetric and skin-marker data, multi-body kinematics optimisation (MKO) and extended Kalman filters (EKF) have been proposed. Embedding the STA model in MKO and EKF reduced the average RMS of marker tracking from 12.6 to 1.6 mm and from 4.3 to 1.9 mm, respectively, showing that a trial-specific STA model calibration is feasible. You could now look at all the literature on spatio-temporal tracking (pedestrians, sports, autonomous driving, GPS trajectory, etc.) to constrain the possible movement of IMU units: https://scholar.google.co.uk/scholar?as_ylo=2015&q=spatio+temporal+tracking+deep+learning&hl=en&as_sdt=0,5&authuser=1 Quantification of three-dimensional soft tissue artifacts in the canine hindlimb during passive stifle motion. https://doi.org/10.1186/s12917-018-1714-7 Soft tissue artifact compensation in knee kinematics by multi-body optimization: Performance of subject-specific knee joint models (2015). https://doi.org/10.1016/j.jbiomech.2015.09.040 Soft-tissue artifact (STA): the human body is too soft as a metrological platform if you start throwing IMUs on the body.