Sungkyunkwan University
Department of Human ICT Convergence
Yoon Sup Choi, Ph.D.
Digital Future of Surgery: how to bring the innovation of digital technology into the operating room
The Convergence of IT, BT and Medicine
Inevitable Tsunami of Change
Digital Future of Surgery
• Wearable Devices
• Augmented Reality
• Artificial Intelligence
• 3D Printing
Wearable Devices
Nat Biotech 2015
Google Glass How-to: Getting Started
• A technically fancy device, but there is skepticism about its usability
• Most functions can also be achieved with a smartphone
• It is necessary to find specific use cases that cannot be served by a smartphone
Killer Application
#ifihadglass project
In February 2013, 1,000 Glass Explorers were selected as beta testers.
As of June 16, 2014, 3 out of the 5 Glass Certified Partners in the Glass at Work program develop applications in medicine/healthcare.
1. Ambulance
• checking medical histories from EMR
• sharing data / communication with ER
2. ER
• uploading EMR data with dictation / video recording
• consulting with specialists by sharing video in real time
3. Examination Room
• uploading EMR data with dictation / video recording
• improving the patient-doctor relationship
4. Operating Room
The Connected Surgeons with Glass
• While performing surgery, Dr. Theodore used Google Glass to view the patient’s CT scans.
• Google Glass does not distract; it is like glancing in the rearview mirror while driving a car.
Dr. Pierre Theodore, a cardiothoracic surgeon at UCSF Medical Center
August 2013
“It was extraordinarily helpful.”
• Consult with a distant colleague using live video from the OR via Google Glass
• Live-streamed to the laptops of medical school students
Dr. Christopher Kaeding, Ohio State University Wexner Medical Center
August 2013
US doctor performs first live Google Glass surgery 
UC Irvine School of Medicine first to
integrate Google Glass into curriculum
April 2014
UC Irvine School of Medicine is taking steps
to become the first in the nation to
integrate the wearable computer into its
four-year curriculum – from first- and
second-year anatomy courses and clinical
skills training to third- and fourth-year
hospital rotations.
Google Glass enters
operating room at Stanford
July 2014
Stanford University Medical
Center’s Department of Cardiothoracic
Surgery has started using Google Glass
in its resident training program.
While a resident is operating on a
patient, surgeons can use the
CrowdOptic software to watch the
resident’s progress and send visual
feedback to the resident on technique.
Augmented Reality
Augmented Reality
Augmented Reality is a technology that enriches the real world with digital information and media, such as 3D models and videos, overlaid in real time on the camera view of your smartphone, tablet, PC or connected glasses.
Extreme future of AR?
http://gencept.com/sight-an-8-minute-augmented-reality-journey-video
http://www.ircad.fr/fr/recherche/visible-patient/
Visible Patient
3D modeling and visualization of anatomical or pathological
structures in the medical image
VR Render: 3D image reconstruction guidance in surgery
https://www.youtube.com/watch?v=JJtiBA24Snc
• Surgical planning
• Training
• Share information with patient / other practitioners
• Intraoperative guidance
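For illustration of the 3D reconstruction step above, here is a minimal sketch (assuming scikit-image and a hypothetical segmented CT volume file; this is not Visible Patient's pipeline) of extracting a surface mesh from a CT volume with marching cubes:

```python
import numpy as np
from skimage import measure

# Minimal sketch: build a 3D surface mesh of an anatomical structure from a CT
# volume by thresholding and running marching cubes. The file name and the
# Hounsfield threshold are illustrative assumptions.
ct_volume = np.load("ct_volume.npy")                     # hypothetical (Z, Y, X) array in HU
structure_mask = (ct_volume > 300).astype(np.float32)    # e.g., a bone-like threshold

# verts: (N, 3) vertex coordinates; faces: (M, 3) triangle vertex indices
verts, faces, normals, values = measure.marching_cubes(structure_mask, level=0.5)
print(f"Reconstructed mesh: {len(verts)} vertices, {len(faces)} triangles")
```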
https://www.youtube.com/watch?v=xedzYSAT8S4
Augmented Reality: superimposing the preoperative 3D patient
modeling onto the real intraoperative view of the patient
• Identify the location of metastasized tumors in the organs
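A minimal sketch of the superimposition idea (assuming OpenCV and a hypothetical pre-rendered model view; a real system also needs camera calibration and model-to-patient registration, which are omitted here):

```python
import cv2
import numpy as np

def overlay_model(frame: np.ndarray, rendered: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a pre-rendered view of the patient's 3D model onto the camera frame."""
    rendered = cv2.resize(rendered, (frame.shape[1], frame.shape[0]))
    return cv2.addWeighted(frame, 1.0 - alpha, rendered, alpha, 0.0)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                    # live intraoperative camera (webcam here)
    model_view = cv2.imread("model_render.png")  # hypothetical pre-rendered model view
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("AR guidance (sketch)", overlay_model(frame, model_view))
        if cv2.waitKey(1) == 27:                 # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```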
VIPAAR
provides real-time, two-way, interactive video conferencing
VIPAAR: Remote Surgery Support
Using VIPAAR, a remote surgeon is able to put his or her hands
into the surgical field and provide collaboration and assistance.
VIPAAR: Connecting Experts
https://www.youtube.com/watch?v=aTOoBwfqBe0
Virtual surgery with VIPAAR and Google Glass
Artificial Intelligence
Jeopardy!
IBM Watson defeated two human champions in Jeopardy! in 2011
IBM Watson Oncology
600,000 pieces of medical evidence
2 million pages of text from 42 medical journals and clinical trials
69 guidelines, 61,540 clinical trials
IBM Watson on Medicine
Watson learned:
• 1,500 lung cancer cases
• physician notes, lab results and clinical research
• 14,700 hours of hands-on training
• Treatment plan suggestions with confidence levels
• Evidence behind the suggestions: articles, best practices, guidelines
• Suggestions of eligible clinical trials
IBM Watson in Korea?
July 9, 2015: SNUH (Seoul National University Hospital)
DeepFace: Closing the Gap to Human-Level Performance in Face Verification
Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR’14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million parameters, where more than 95% come from the local and fully connected layers.

The front-end layers have very few parameters; they merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]: like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters, since different regions of an aligned face image have different local statistics and the spatial stationarity assumption of convolution does not hold.

The goal of training is to maximize the probability of the correct class (face id). This is achieved by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is L = -log p_k. The loss is minimized over the parameters by computing the gradient of L with respect to the parameters and updating them with stochastic gradient descent.
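A small numerical sketch of this objective (NumPy only, not the DeepFace code): the softmax cross-entropy loss L = -log p_k and its gradient with respect to the logits, which stochastic gradient descent would propagate back through the network.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_loss(logits: np.ndarray, true_class: int) -> float:
    # L = -log p_k, where p_k is the softmax probability of the true identity k
    return float(-np.log(softmax(logits)[true_class]))

def loss_gradient_wrt_logits(logits: np.ndarray, true_class: int) -> np.ndarray:
    # dL/dlogits = p - one_hot(k); SGD backpropagates this through the network
    grad = softmax(logits)
    grad[true_class] -= 1.0
    return grad

logits = np.array([2.0, 0.5, -1.0])    # toy scores over 3 identities
print(cross_entropy_loss(logits, true_class=0), loss_gradient_wrt_logits(logits, 0))
```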
Human: 95% vs. DeepFace (Facebook): 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
FaceNet: A Unified Embedding for Face Recognition and Clustering
Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering
Human: 95% vs. FaceNet (Google): 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Figure 6. LFW errors. This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.

5.7. Performance on YouTube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12% ± 0.39. Using the first one thousand frames results in 95.18%. Compared to [17] (91.4%), who also evaluate one hundred frames per video, we reduce the error rate by almost half. DeepID2+ [15] achieved 93.2%, and our method reduces this error by 30%, comparable to our improvement on LFW.
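A small sketch of this evaluation protocol (an assumed reconstruction, not the authors' code): given L2-normalized embeddings of the first hundred detected face frames of two videos, score the pair by the mean distance over all frame pairs and threshold it.

```python
import numpy as np

def mean_pairwise_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """emb_a: (Na, d) and emb_b: (Nb, d) L2-normalized face embeddings."""
    diff = emb_a[:, None, :] - emb_b[None, :, :]           # (Na, Nb, d)
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean()) # mean Euclidean distance

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 1.1) -> bool:
    # The threshold would be chosen on a validation split; 1.1 is illustrative only.
    return mean_pairwise_distance(emb_a, emb_b) < threshold
```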
5.8. Face Clustering
Our compact embedding lends itself to being used to cluster a user’s personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user’s personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.

Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user’s personal photo collection were clustered together.
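A minimal agglomerative-clustering sketch with scikit-learn (the embedding values and the distance threshold are placeholders, not the paper's):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Cluster face embeddings into identities; faces closer than the distance
# threshold in embedding space end up in the same cluster.
embeddings = np.random.default_rng(0).normal(size=(200, 128))   # placeholder 128-D embeddings

clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0,
                                    linkage="average")
labels = clusterer.fit_predict(embeddings)
print(f"Found {labels.max() + 1} identity clusters")
```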
6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17] that use the CNN bottleneck layer or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance. Another strength of our model is that it only requires minimal alignment (tight crops around the face area).
Business Area
Medical Image Analysis
VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently, to make diagnoses faster and more accurate.
[Example panels: original image → automatic segmentation, with normal, emphysema and reticular opacity regions highlighted.]
Our system finds DILDs with the highest accuracy. (*DILD: diffuse interstitial lung disease)
Digital Radiologist
Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
[Figure: the C-Path image analysis pipeline (Fig. 1).]
(A) Basic image processing and feature construction: the H&E image is broken into superpixels, and nuclei are identified within each superpixel.
(B) Building an epithelial/stromal classifier (epithelial vs. stroma).
(C) Constructing higher-level contextual/relational features: relationships between epithelial nuclear neighbors; between morphologically regular and irregular nuclei; between epithelial and stromal objects; between epithelial nuclei and cytoplasm; relationships of contiguous epithelial regions with underlying nuclear objects; characteristics of stromal nuclei and stromal matrix; characteristics of epithelial nuclei and epithelial cytoplasm.
(D) Learning an image-based model to predict survival: processed images from patients alive at 5 years and from patients deceased at 5 years are used for L1-regularized logistic regression model building, producing a 5YS predictive model that assigns P(survival) over time to unlabeled images.
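A minimal sketch of the model-building step in panel (D), using scikit-learn's L1-penalized logistic regression as a stand-in for the authors' implementation (the feature matrix and labels below are random placeholders, not the NKI data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit an L1-regularized logistic regression on image-derived features to
# predict 5-year survival (5YS), then score unlabeled images.
# X: (n_patients, n_features) morphologic feature matrix; y: 1 = alive at 5 years.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))            # placeholder for thousands of image features
y = rng.integers(0, 2, size=200)           # placeholder survival labels

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # L1 zeroes most coefficients
model.fit(X, y)

p_survival = model.predict_proba(X[:5])[:, 1]          # P(survival) for new images
print(p_survival, int((model.coef_ != 0).sum()), "features selected")
```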
TMAs contain 0.6-mm-diameter cores (median of two cores per case) that represent only a small sample of the full tumor. We acquired data from two separate and independent cohorts: Netherlands Cancer Institute (NKI; 248 patients) and Vancouver General Hospital (VGH; 328 patients). Unlike previous work in cancer morphometry (18–21), our image analysis pipeline was not limited to a predefined set of morphometric features selected by pathologists. Rather, C-Path measures an extensive, quantitative feature set from the breast cancer epithelium and the stroma (Fig. 1). Our image processing system first performed an automated, hierarchical scene segmentation that generated thousands of measurements, including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. The pipeline consisted of three stages (Fig. 1, A to C, and tables S8 and S9). First, we used a set of processing steps to separate the tissue from the background, partition the image into small regions of coherent appearance known as superpixels, find nuclei within the superpixels, and construct …
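For illustration, a sketch of that first pipeline stage using common image-analysis tools (assumed libraries and thresholds, not the C-Path implementation): separate tissue from background, partition the image into superpixels, and find nuclei within the tissue.

```python
import numpy as np
from skimage import color, filters, measure, segmentation

def basic_scene_segmentation(rgb_image: np.ndarray, n_superpixels: int = 2000):
    """Illustrative first stage: tissue mask, superpixels, and nuclei labels."""
    gray = color.rgb2gray(rgb_image)

    # 1. Tissue vs. background by global thresholding (background is bright in H&E).
    tissue_mask = gray < filters.threshold_otsu(gray)

    # 2. Superpixels: small regions of coherent appearance.
    superpixels = segmentation.slic(rgb_image, n_segments=n_superpixels,
                                    compactness=10, start_label=1)

    # 3. Nuclei: dark (hematoxylin-stained) blobs inside the tissue mask.
    nuclei_mask = (gray < 0.4) & tissue_mask          # illustrative threshold
    nuclei = measure.label(nuclei_mask)

    return tissue_mask, superpixels, nuclei
```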
Identification of novel prognostically important morphologic features. Basic cellular morphologic properties (epithelial regular nuclei = red; epithelial atypical nuclei = pale blue; epithelial cytoplasm = purple; stromal matrix = green; stromal round nuclei = dark green; stromal spindled nuclei = teal blue; unclassified regions = dark gray; spindled nuclei in unclassified regions = yellow; round nuclei in unclassified regions = gray; background = white). (Left panel) After the classification of each image object, a rich feature set is constructed. (D) Learning an image-based model to predict survival. Processed images from patients alive at 5 years after surgery and from patients deceased at 5 years after surgery were used to construct an image-based prognostic model. After construction of the model, it was applied to a test set of breast cancer images (not used in model building) to classify patients as high or low risk of death by 5 years.
Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide
an accurate and reproducible means for assessing prognosis from microscopic image data.
Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
Top stromal features associated with survival.
… primarily characterizing epithelial nuclear characteristics, such as size, color, and texture (21, 36). In contrast, after initial filtering of images to ensure high-quality TMA images and training of the C-Path models using expert-derived image annotations (epithelium and stroma labels to build the epithelial-stromal classifier, and survival time and survival status to build the prognostic model), our image analysis system is automated with no manual steps, which greatly increases its scalability. Additionally, in contrast to previous approaches, our system measures thousands of morphologic descriptors of diverse … identification of prognostic features whose significance was not previously recognized.

Using our system, we built an image-based prognostic model on the NKI data set and showed that in this patient cohort the model was a strong predictor of survival and provided significant additional prognostic information to clinical, molecular, and pathological prognostic factors in a multivariate model. We also demonstrated that the image-based prognostic model, built using the NKI data set, is a strong prognostic factor on another, independent data set with very different …
Fig. 5. Top epithelial features. The eight panels in the figure (A to H) each shows one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis. (A) SD of the (SD of intensity/mean intensity) for pixels within a ring of the center of epithelial nuclei. Left, relatively consistent nuclear intensity pattern (low score); right, great nuclear intensity diversity (high score). (B) Sum of the number of unclassified objects. Red, epithelial regions; green, stromal regions; no overlaid color, unclassified region. Left, few unclassified objects (low score); right, higher number of unclassified objects (high score). (C) SD of the maximum blue pixel value for atypical epithelial nuclei. Left, high score; right, low score. (D) Maximum distance between atypical epithelial nuclei. Left, high score; right, low score. (Insets) Red, atypical epithelial nuclei; black, typical epithelial nuclei. (E) Minimum elliptic fit of epithelial contiguous regions. Left, high score; right, low score. (F) SD of distance between epithelial cytoplasmic and nuclear objects. Left, high score; right, low score. (G) Average border between epithelial cytoplasmic objects. Left, high score; right, low score. (H) Maximum value of the minimum green pixel intensity value in epithelial contiguous regions. Left, low score indicating black pixels within epithelial region; right, higher score indicating presence of epithelial regions lacking black pixels.
… and stromal matrix throughout the image, with thin cords of epithelial cells infiltrating through stroma across the image, so that each stromal matrix region borders a relatively constant proportion of epithelial and stromal regions. The stromal feature with the second largest coefficient (Fig. 4B) was the sum of the minimum green intensity value of stromal-contiguous regions. This feature received a value of zero when stromal regions contained dark pixels (such as inflammatory nuclei). The feature received a positive value when stromal objects were devoid of dark pixels. This feature provided information about the relationship between stromal cellular composition and prognosis and suggested that the presence of inflammatory cells in the stroma is associated with poor prognosis, a finding consistent with previous observations (32). The third most significant stromal feature (Fig. 4C) was a measure of the relative border between spindled stromal nuclei to round stromal nuclei, with an increased relative border of spindled stromal nuclei to round stromal nuclei associated with worse overall survival. Although the biological underpinning of this morphologic feature is currently not known, this analysis suggested that spatial relationships between different populations of stromal cell types are associated with breast cancer progression.

Reproducibility of C-Path 5YS model predictions on samples with multiple TMA cores
For the C-Path 5YS model (which was trained on the full NKI data set), we assessed the intrapatient agreement of model predictions when predictions were made separately on each image contributed by patients in the VGH data set. For the 190 VGH patients who contributed two images with complete image data, the binary predictions (high or low risk) on the individual images agreed with each other for 69% (131 of 190) of the cases and agreed with the prediction on the averaged data for 84% (319 of 380) of the images. Using the continuous prediction score (which ranged from 0 to 100), the median of the absolute difference in prediction score among the patients with replicate images was 5%, and the Spearman correlation among replicates was 0.27 (P = 0.0002) (fig. S3). This degree of intrapatient agreement is only moderate, and these findings suggest significant intrapatient tumor heterogeneity, which is a cardinal feature of breast carcinomas (33–35). Qualitative visual inspection of images receiving discordant scores suggested that intrapatient variability in both the epithelial and the stromal components is likely to contribute to discordant scores for the individual images. These differences appeared to relate both to the proportions of the epithelium and stroma and to the appearance of the epithelium and stroma. Last, we sought to analyze whether survival predictions were more accurate on the VGH cases that contributed multiple cores compared to the cases that contributed only a single core. This analysis showed that the C-Path 5YS model showed significantly improved prognostic prediction accuracy on the VGH cases for which we had multiple images compared to the cases that contributed only a single image (Fig. 7). Together, these findings show a significant degree of intrapatient variability and indicate that increased tumor sampling is associated with improved model performance.
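For context, the Spearman correlation reported for replicate cores is a rank correlation; a minimal sketch of computing it for paired prediction scores with SciPy (the scores below are random placeholders, not the VGH data):

```python
import numpy as np
from scipy.stats import spearmanr

# Intrapatient agreement between replicate cores: rank-correlate the continuous
# prediction scores of the two images contributed by each patient.
rng = np.random.default_rng(1)
score_core1 = rng.uniform(0, 100, size=190)
score_core2 = np.clip(score_core1 + rng.normal(0, 30, size=190), 0, 100)

rho, p_value = spearmanr(score_core1, score_core2)
median_abs_diff = np.median(np.abs(score_core1 - score_core2))
print(f"Spearman rho = {rho:.2f} (P = {p_value:.4f}), median |diff| = {median_abs_diff:.1f}")
```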
[Fig. 4 image panels: a heat map of stromal matrix objects' mean absolute difference to neighbors; an H&E image separated into epithelial and stromal objects; panels A to C each pair a worse-prognosis with an improved-prognosis example.]
Fig. 4. Top stromal features associated with survival. (A) Variability in absolute difference in intensity between stromal matrix regions and neighbors. Top panel, high score (24.1); bottom panel, low score (10.5). (Insets) Top panel, high score; bottom panel, low score. Right panels, stromal matrix objects colored blue (low), green (medium), or white (high) according to each object’s absolute difference in intensity to neighbors. (B) Presence …
Top epithelial features. The eight panels in the figure (A to H) each shows one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis.
Gauss Surgical: Estimation of Blood Loss in Surgery with an iPad Camera
1. Surgical Sponge (Pixel App): FDA 510(k) clearance in 2012
2. Suction Container (Triton App): FDA 510(k) clearance in March 2015
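As a rough illustration of the idea only (a toy sketch, not Gauss Surgical's validated algorithm), one could regress hemoglobin mass on simple color features of a photographed, segmented sponge; every name and coefficient below is a made-up placeholder.

```python
import numpy as np

def sponge_features(rgb: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3) float image in [0, 1] of the segmented sponge region."""
    redness = rgb[..., 0] - 0.5 * (rgb[..., 1] + rgb[..., 2])   # crude red dominance
    return np.array([redness.mean(), redness.std(), (redness > 0.2).mean(), 1.0])

# Hypothetical calibration coefficients (would be fit on sponges with known hemoglobin content).
CALIBRATION_WEIGHTS = np.array([40.0, 10.0, 25.0, 0.5])

def estimated_hemoglobin_grams(rgb: np.ndarray) -> float:
    return float(CALIBRATION_WEIGHTS @ sponge_features(rgb))

toy_sponge = np.zeros((100, 100, 3))
toy_sponge[..., 0], toy_sponge[..., 1:] = 0.8, 0.2              # uniformly reddish toy image
print(f"≈ {estimated_hemoglobin_grams(toy_sponge):.1f} g hemoglobin (toy numbers)")
```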
3D Printers
Replicator
• A 3D object is constructed by adding material layer by layer (usually sprayed)
• Materials: rubber, plastics, paper, polyurethane, metals, and even cells
3D printers: Replicators in the real world
‘Liberator’, the 3D Printed Gun
$25, 3-D printed handgun
Winsun: 3D Printed House
Winsun: 3D Printed House
3D Printed Hearing Aid
http://www.telegraph.co.uk/technology/news/9066721/3D-printer-builds-new-jaw-bone-for-transplant.html
• The artificial jaw was made from titanium powder, heated and built-up in
layers in a 3D printer to create a working lower jaw which was then
finished with a bioceramic coating.
• The implant was fitted in an operation in the Netherlands in June 2011.
3D Printed Jaw
Tracheobronchomalacia (TBM)
Bioresorbable Airway Splint
Created with a Three-Dimensional Printer
N Engl J Med 2013; 368:2043-2045
• A custom-designed and custom-fabricated resorbable airway splint, which was
manufactured from polycaprolactone with the use of a 3D printer
• Our bellowed topology design provides resistance against collapse while
simultaneously allowing flexion, extension, and expansion with growth.
N Engl J Med 2013; 368:2043-2045
One year after surgery, imaging and
endoscopy showed a patent left
mainstem bronchus
Morrison RJ et al. Sci Transl Med. 2015
Fig. 1. Computational image-based design of 3D-printed tracheobronchial splints. (A) Stereolithography (.STL) representation (top) and virtual rendering (bottom) of the tracheobronchial splint demonstrating the bounded design parameters of the device. We used a fixed open angle of 90° to allow placement of the device over the airway. Inner diameter, length, wall thickness, and number and spacing of suture holes were adjusted according to patient anatomy (Table 1) and can be adjusted on the submillimeter scale. Bellow height and periodicity (ribbing) can be adjusted to allow additional flexion of the device in the z axis. (B) Mechanism of action of the tracheobronchial splint in treating tracheobronchial collapse in TBM. Solid arrows denote positive intrathoracic pressure generated on expiration. Hollow arrow denotes vector of tracheobronchial collapse. Dashed arrow denotes vector of opening wedge displacement of the tracheobronchial splint with airway growth. (C) Digital Imaging and Communications in Medicine (DICOM) images of the patient’s CT scan were used to generate a 3D model of the patient’s airway via segmentation in Mimics. A centerline was fit within the affected segment of the airway, and measurements of airway hydraulic diameter (DH) and length were used as design parameters to generate the device design. (D) Design parameters were input into MATLAB to generate an output as a series of 2D .TIFF image slices using Fourier series representation. Light and gray areas indicate structural components; dark areas are voids. The top image demonstrates a device bellow, and the bottom image demonstrates suture holes incorporated into the device design. The .TIFF images were imported into Mimics to generate an .STL of the final splint design. (E) Virtual assessment of fit of the tracheobronchial splint over the segmented primary airway model for all patients. (F) Final 3D-printed PCL tracheobronchial splint used to treat the left bronchus of patient 2. The splint incorporated a 90° spiral to the open angle of the device to accommodate concurrent use of a right bronchial splint and growth of the right bronchus.
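As a note on panel (C), the airway hydraulic diameter is conventionally DH = 4A/P for a cross-section of area A and perimeter P; a minimal sketch of computing it from a binary cross-section mask (illustrative only, not the authors' Mimics centerline workflow):

```python
import numpy as np
from skimage import measure

def hydraulic_diameter(cross_section_mask: np.ndarray, pixel_size_mm: float) -> float:
    """DH = 4*A/P of an airway cross-section given as a binary segmentation mask."""
    area = cross_section_mask.sum() * pixel_size_mm ** 2                # A in mm^2
    perimeter = measure.perimeter(cross_section_mask) * pixel_size_mm   # P in mm
    return 4.0 * area / perimeter

mask = np.zeros((64, 64), dtype=bool)
yy, xx = np.ogrid[:64, :64]
mask[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = True       # toy circular airway, radius 20 px
print(f"DH ≈ {hydraulic_diameter(mask, pixel_size_mm=0.5):.1f} mm")  # ≈ diameter for a circle
```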
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
DISCUSSION
We report successful implantation of 3D-printed, patient-specific bioresorbable airway splints for treatment of severe TBM. The personalized splints conformed to the patients’ individual geometries and expanded …

… compression) (20). Thus, we defined our maximum compressive allowance as less than 50% deformation under a 20-N load. However, a similar degree of bending compliance was too low for the splint to be effective at maintaining airway patency. We expected that under a 20-N load, the splint should allow greater than 20% displacement in bending to accommodate flexion of the airway but less than 50% displacement (greater than which may interrupt airflow).
Fig. 2. Pre- and postoperative imaging of patients. Black arrows in all figures denote the location of the malacic segment of the airway. White arrows designate the location/presence of the tracheobronchial splint. Asterisk denotes focal degradation of the splint. All CT images are coronal minimum intensity projection (MinIP) reformatted images of the lung and airway on expiration. All MRI images are axial proton density turbo spin echo MRI images of the chest. (A) Preoperative (top) and 1-month postoperative (upper middle) CT images of patient 1. Postoperative MRI (lower middle) demonstrated presence of the splint around the left bronchus in patient 1 at 12 months and focal fragmentation of the splint due to degradation at 39 months (bottom). (B) Preoperative (top) and 1-month postoperative (upper middle) CT images of patient 2. Postoperative MRI (lower middle) demonstrated presence of splints around the left and right bronchi in patient 2 at 1 month. Note that the patient had bilateral mainstem bronchomalacia and received a tracheobronchial splint on both the left and right mainstem bronchus. (C) Preoperative (top) and 1-month postoperative (bottom) CT images of patient 3.
Morrison RJ et al. Sci Transl Med. 2015
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
… pressure (table S2). Patient airway image-based computational design coupled with 3D printing allowed rapid production of these devices. The regulatory approval process and evaluation of patient candidacy needed 7 days. All devices were completed within this time frame. Design and …

MATERIALS AND METHODS
Study design
Our hypothesis was that an external splint could be designed to obtain …
Fig. 4. Mean airway caliber over time. Patient airway DH was
measured over time after implantation of the 3D-printed bioresorbable
material. Solid lines denote bronchi that received the tracheobronchial
splint. Dashed lines are normal, contralateral bronchi for patients 1 and
3. All caliber measurements were made on expiratory-phase CT imaging
using the centerline function of each isolated bronchus in Mimics. The
centerline function measures DH every 0.1 to 1.0 mm along the entire
segment of the isolated model. Measurements are represented as
averages of all measurements along the length of the isolated affected
bronchus model ± SD. Pre-op, preoperative.
Morrison RJ et al. Sci Transl Med. 2015
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
3D Printed Skull
• A 22-year-old female from the Netherlands
• A chronic bone disorder had increased the thickness of her skull from 1.5 to 5 cm, causing reduced eyesight and severe headaches.
• The top section of her skull was removed and replaced with a 3D-printed implant.
March 2014
3D Printed Skull
• Since the operation, the patient has regained her sight entirely, is symptom-free, and is back at work.
March 2014
by prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
You cannot physically touch 3D simulated models
by prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
by prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
Digital Future of Surgery
• Wearable Devices
• Augmented Reality
• Artificial Intelligence
• 3D Printing
Feedback/Questions
• Email: yoonsup.choi@gmail.com
• Blog: http://www.yoonsupchoi.com
• Facebook: Yoon Sup Choi

call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...saminamagar
 
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Availablenarwatsonia7
 
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service JaipurHigh Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipurparulsinha
 
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Miss joya
 
Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Gabriel Guevara MD
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...narwatsonia7
 
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...narwatsonia7
 

Kürzlich hochgeladen (20)

Artifacts in Nuclear Medicine with Identifying and resolving artifacts.
Artifacts in Nuclear Medicine with Identifying and resolving artifacts.Artifacts in Nuclear Medicine with Identifying and resolving artifacts.
Artifacts in Nuclear Medicine with Identifying and resolving artifacts.
 
Housewife Call Girls Bangalore - Call 7001305949 Rs-3500 with A/C Room Cash o...
Housewife Call Girls Bangalore - Call 7001305949 Rs-3500 with A/C Room Cash o...Housewife Call Girls Bangalore - Call 7001305949 Rs-3500 with A/C Room Cash o...
Housewife Call Girls Bangalore - Call 7001305949 Rs-3500 with A/C Room Cash o...
 
VIP Call Girls Lucknow Nandini 7001305949 Independent Escort Service Lucknow
VIP Call Girls Lucknow Nandini 7001305949 Independent Escort Service LucknowVIP Call Girls Lucknow Nandini 7001305949 Independent Escort Service Lucknow
VIP Call Girls Lucknow Nandini 7001305949 Independent Escort Service Lucknow
 
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking ModelsMumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
Mumbai Call Girls Service 9910780858 Real Russian Girls Looking Models
 
Call Girls Jayanagar Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Jayanagar Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Jayanagar Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Jayanagar Just Call 7001305949 Top Class Call Girl Service Available
 
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service MumbaiVIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
VIP Call Girls Mumbai Arpita 9910780858 Independent Escort Service Mumbai
 
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original PhotosBook Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
Book Call Girls in Yelahanka - For 7001305949 Cheap & Best with original Photos
 
Call Girl Service Bidadi - For 7001305949 Cheap & Best with original Photos
Call Girl Service Bidadi - For 7001305949 Cheap & Best with original PhotosCall Girl Service Bidadi - For 7001305949 Cheap & Best with original Photos
Call Girl Service Bidadi - For 7001305949 Cheap & Best with original Photos
 
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service MumbaiLow Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
Low Rate Call Girls Mumbai Suman 9910780858 Independent Escort Service Mumbai
 
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls ServiceCall Girls Thane Just Call 9910780858 Get High Class Call Girls Service
Call Girls Thane Just Call 9910780858 Get High Class Call Girls Service
 
call girls in green park DELHI 🔝 >༒9540349809 🔝 genuine Escort Service 🔝✔️✔️
call girls in green park  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service 🔝✔️✔️call girls in green park  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service 🔝✔️✔️
call girls in green park DELHI 🔝 >༒9540349809 🔝 genuine Escort Service 🔝✔️✔️
 
Housewife Call Girls Hoskote | 7001305949 At Low Cost Cash Payment Booking
Housewife Call Girls Hoskote | 7001305949 At Low Cost Cash Payment BookingHousewife Call Girls Hoskote | 7001305949 At Low Cost Cash Payment Booking
Housewife Call Girls Hoskote | 7001305949 At Low Cost Cash Payment Booking
 
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort ServiceCollege Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
College Call Girls Vyasarpadi Whatsapp 7001305949 Independent Escort Service
 
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...call girls in Connaught Place  DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
call girls in Connaught Place DELHI 🔝 >༒9540349809 🔝 genuine Escort Service ...
 
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service AvailableCall Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
Call Girls Hosur Just Call 7001305949 Top Class Call Girl Service Available
 
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service JaipurHigh Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
High Profile Call Girls Jaipur Vani 8445551418 Independent Escort Service Jaipur
 
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
Russian Call Girls in Pune Riya 9907093804 Short 1500 Night 6000 Best call gi...
 
Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024Asthma Review - GINA guidelines summary 2024
Asthma Review - GINA guidelines summary 2024
 
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
Russian Call Girls Chickpet - 7001305949 Booking and charges genuine rate for...
 
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
Call Girls Service in Bommanahalli - 7001305949 with real photos and phone nu...
 

Digital Future of the Surgery: Bringing the Innovation of Digital Technology into the Operating Room

  • 1. Sungkyunkwan University Department of Human ICT Convergence Yoon Sup Choi, Ph.D. Digital Future of the Surgery : how to bring the innovation of digital technology into the operating room
  • 2. The Convergence of IT, BT and Medicine
  • 3.
  • 5. Digital Future of the Surgery • Wearable Devices • Augmented Reality • Artificial Intelligence • 3D Printings
  • 8.
  • 9. Google Glass How-to: Getting Started
  • 10. • A technically fancy device, but there is skepticism about its usability • Most functions are achievable with smartphones • It is necessary to find a specific use case that cannot be served by a smartphone
  • 12. #ifihadglass project In February 2013, 1,000 Glass Explorers were selected as beta testers.
  • 13. Jun 16, 2014: 3 out of 5 Glass Certified Partners in the Glass at Work program develop applications in medicine/healthcare
  • 14. 1. Ambulance • checking medical histories from EMR • sharing data / communication with ER
  • 15. 2. ER • uploading EMR data with dictation / video recording • consulting with specialists with sharing video in real time
  • 16. 3. Examination Room • uploading EMR data with dictation / video recording • improving the patient-doctor relationship
  • 22. • While performing surgery, he used Google Glass to consult the patient's CT scans. • Google Glass doesn't distract; it is like glancing in the rearview mirror while driving a car. Dr. Pierre Theodore, a cardiothoracic surgeon at UCSF Medical Center, August 2013: “It was extraordinarily helpful.”
  • 23. • Consult with a distant colleague using live video from the OR via Google Glass • Live streamed to the laptops of medical school students Dr. Christopher Kaeding, Ohio State University Wexner Medical Center August 2013 US doctor performs first live Google Glass surgery
  • 24. Dr. Christopher Kaeding of Ohio State University Wexner Medical Center • Consult with a distant colleague using live video from the OR via Google Glass • Live streamed to the laptops of medical school students August 2013 US doctor performs first live Google Glass surgery
  • 25. UC Irvine School of Medicine first to integrate Google Glass into curriculum 2014. 4. UC Irvine School of Medicine is taking steps to become the first in the nation to integrate the wearable computer into its four-year curriculum – from first- and second-year anatomy courses and clinical skills training to third- and fourth-year hospital rotations.
  • 26. Google Glass enters operating room at Stanford 2014. 7. Stanford University Medical Center’s Department of Cardiothoracic Surgery has started using Google Glass in its resident training program. While a resident is operating on a patient, surgeons can use the CrowdOptic software to watch the resident’s progress and send visual feedback to the resident on technique.
  • 28. Augmented Reality Augmented Reality is a technology enriching the real world with digital information and media, such as 3D models and videos, overlaying in real-time the camera view of your smartphone, tablet, PC or connected glasses.
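The definition above comes down to one geometric operation: projecting known 3D content into the live camera view so it appears anchored to the real scene. A minimal sketch of that step is below, assuming camera intrinsics and pose already come from an upstream calibration or marker-tracking stage; the vertex array, camera matrix, and pose values are illustrative placeholders, not part of any system named on these slides.

```python
# Minimal AR overlay sketch: project 3D model vertices into a camera frame.
# Assumptions: intrinsics (camera_matrix), pose (rvec, tvec), and the model
# points are provided by an upstream tracking/calibration step.
import cv2
import numpy as np

def overlay_model(frame, model_points_3d, rvec, tvec, camera_matrix, dist_coeffs):
    """Project 3D model vertices into the camera frame and draw them as dots."""
    projected, _ = cv2.projectPoints(
        model_points_3d.astype(np.float32), rvec, tvec, camera_matrix, dist_coeffs
    )
    for (u, v) in projected.reshape(-1, 2):
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (int(u), int(v)), 2, (0, 255, 0), -1)
    return frame

# Example with a hypothetical pinhole camera and an identity pose:
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
cube = np.array([[x, y, z] for x in (0, 0.1) for y in (0, 0.1) for z in (0.5, 0.6)])
frame = np.zeros((480, 640, 3), dtype=np.uint8)
overlay_model(frame, cube, np.zeros(3), np.zeros(3), K, np.zeros(5))
```

In a real AR pipeline the same projection runs per frame, with the pose updated by tracking, and a rendered model (rather than dots) is composited over the camera image.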
  • 29. Extreme future of AR? http://gencept.com/sight-an-8-minute-augmented-reality-journey-video
  • 31. 3D modeling and visualization of anatomical or pathological structures in the medical image
  • 32. 3D modeling and visualization of anatomical or pathological structures in the medical image
  • 33. VR Render: 3D image reconstruction guidance in surgery https://www.youtube.com/watch?v=JJtiBA24Snc
  • 34. • Surgical planning • Training • Share information with patient / other practitioners • Intraoperative guidance
  • 35. https://www.youtube.com/watch?v=xedzYSAT8S4 Augmented Reality: superimposing the preoperative 3D patient modeling onto the real intraoperative view of the patient • Identify the location of metastasized tumors in the organs
  • 36. https://www.youtube.com/watch?v=xedzYSAT8S4 Augmented Reality: superimposing the preoperative 3D patient modeling onto the real intraoperative view of the patient
  • 37. VIPAAR provides real-time, two-way, interactive video conferencing
  • 38.
  • 39. VIPAAR: Remote Surgery Support. Using VIPAAR, a remote surgeon is able to put his or her hands into the surgical field and provide collaboration and assistance.
  • 44.
  • 45. Jeopardy! IBM Watson defeated two human champions in Jeopardy! in 2011
  • 47. IBM Watson on Medicine. Watson learned: 600,000 pieces of medical evidence; 2 million pages of text from 42 medical journals and clinical trials; 69 guidelines; 61,540 clinical trials; plus 1,500 lung cancer cases (physician notes, lab results and clinical research); plus 14,700 hours of hands-on training.
  • 48. • Treatment plans suggestions with confidence level • Evidences behind the suggestions: articles, best practices, guidelines • Suggestion of eligible clinical trials
  • 49. IBM Watson in Korea? 2015.7.9. SNUH
  • 50. DeepFace: Closing the Gap to Human-Level Performance in Face Verification. Taigman, Y. et al. (2014), CVPR'14. Architecture (Fig. 2): a front end of convolution-pooling-convolution filtering on the rectified input, followed by three locally connected layers and two fully connected layers; the network has more than 120 million parameters. Recognition accuracy on the Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people): Human 95% vs. DeepFace (Facebook) 97.35%.
  • 51. FaceNet: A Unified Embedding for Face Recognition and Clustering. Schroff, F. et al. (2015). FaceNet directly learns an embedding into a Euclidean space for face verification, trained end to end; the compact embedding also supports face clustering of personal photo collections. Recognition accuracy on the LFW dataset (13,233 images, 5,749 people): Human 95% vs. FaceNet (Google) 99.63%; 95.12% on the YouTube Faces DB.
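The verification logic behind an embedding model like FaceNet is simple once the embedding exists: map each face to a point in Euclidean space and declare two faces the same person when their embeddings are closer than a threshold. The sketch below shows only that decision step; the `embed()` function is a stand-in (a fixed random projection), since the real system uses a deep CNN trained with a triplet loss, and the threshold value is illustrative.

```python
# FaceNet-style verification sketch: distance between embeddings vs. a threshold.
# embed() is a toy stand-in for a trained deep network.
import numpy as np

def embed(image: np.ndarray, dim: int = 128) -> np.ndarray:
    """Stand-in embedding: fixed random projection of pixels, L2-normalized."""
    rng = np.random.default_rng(0)                 # fixed seed -> repeatable projection
    proj = rng.standard_normal((dim, image.size))
    v = proj @ image.ravel().astype(np.float64)
    return v / np.linalg.norm(v)

def same_person(img_a, img_b, threshold=1.1):
    """Decide 'same person' if squared L2 distance between embeddings < threshold."""
    d = np.sum((embed(img_a) - embed(img_b)) ** 2)
    return d < threshold, d

a = np.random.rand(160, 160)                 # hypothetical aligned face crops
b = a + 0.01 * np.random.rand(160, 160)
match, dist = same_person(a, b)
```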
  • 52. Business Area: Medical Image Analysis. VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently to make diagnoses faster and more accurate. Original image vs. automatic segmentation; classes: Normal, Emphysema, Reticular Opacity. Our system finds DILDs at the highest accuracy (*DILDs: Diffuse Interstitial Lung Diseases). Digital Radiologist
  • 53. Digital Radiologist Med Phys. 2013 May;40(5):051912. doi: 10.1118/1.4802214.
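The DILD work on the previous slides amounts to patch-wise lung-texture classification: each small CT region gets a label (normal, emphysema, reticular opacity, ...) and the per-patch labels form a coarse automatic segmentation. The sketch below illustrates only that workflow; the actual product uses deep learning, whereas here crude intensity-histogram features, a logistic regression classifier, and fully synthetic data stand in.

```python
# Patch-wise lung texture classification sketch (illustrative, synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

CLASSES = ["normal", "emphysema", "reticular_opacity"]

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Intensity histogram plus mean/std as a crude texture descriptor (HU-like range)."""
    hist, _ = np.histogram(patch, bins=16, range=(-1000, 200), density=True)
    return np.concatenate([hist, [patch.mean(), patch.std()]])

def label_slice(ct_slice, clf, patch=32):
    """Classify every non-overlapping patch in a CT slice -> coarse label map."""
    h, w = ct_slice.shape
    labels = np.zeros((h // patch, w // patch), dtype=int)
    for i in range(h // patch):
        for j in range(w // patch):
            p = ct_slice[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            labels[i, j] = clf.predict([patch_features(p)])[0]
    return labels

# Synthetic training patches with class-dependent intensity statistics.
rng = np.random.default_rng(1)
X, y = [], []
for cls, (mu, sigma) in enumerate([(-750, 60), (-920, 40), (-550, 90)]):
    for _ in range(100):
        X.append(patch_features(rng.normal(mu, sigma, (32, 32))))
        y.append(cls)
clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
label_map = label_slice(rng.normal(-750, 60, (256, 256)), clf, patch=32)
```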
  • 54. Digital Pathologist (C-Path). Sci Transl Med. 2011 Nov 9;3(108):108ra113. Pipeline: (A) basic image processing and feature construction: the H&E image is separated from background, broken into superpixels, and nuclei are identified within each superpixel; (B) an epithelial vs. stroma classifier is built from expert-annotated regions; (C) higher-level contextual/relational features are constructed (relationships between epithelial nuclear neighbors, between morphologically regular and irregular nuclei, between epithelial and stromal objects, between epithelial nuclei and cytoplasm, plus characteristics of stromal nuclei, stromal matrix, epithelial nuclei, and epithelial cytoplasm); (D) an image-based model to predict 5-year survival is learned by L1-regularized logistic regression from processed images of patients alive vs. deceased at 5 years, then applied to a held-out test set to classify patients as high or low risk. Data: two independent breast cancer cohorts, NKI (248 patients) and VGH (328 patients), on tissue microarrays with 0.6-mm cores. A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide an accurate and reproducible means for assessing prognosis from microscopic image data.
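The final modeling step named above is an L1-regularized logistic regression mapping thousands of automatically measured morphological features to the probability of 5-year survival. The sketch below shows that step only, on a synthetic feature matrix; the upstream feature extraction (superpixels, epithelial/stromal classification, contextual features) is assumed to have already produced the matrix, and the cohort size and feature count are illustrative.

```python
# C-Path-style final step sketch: sparse (L1) logistic regression for 5-year survival.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients, n_features = 248, 6000            # NKI-sized cohort, thousands of features
X = rng.standard_normal((n_patients, n_features))
# Synthetic outcome driven by a handful of "prognostic" features.
true_w = np.zeros(n_features)
true_w[:10] = rng.standard_normal(10)
y = (X @ true_w + 0.5 * rng.standard_normal(n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The L1 penalty drives most feature weights to exactly zero, leaving a sparse,
# interpretable set of prognostic image features.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
p_survival = model.predict_proba(X_te)[:, 1]        # predicted P(alive at 5 years)
n_selected = int(np.count_nonzero(model.coef_))     # features kept by the L1 penalty
```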
  • 55. Digital Pathologist (C-Path). Sci Transl Med. 2011 Nov 9;3(108):108ra113. Top stromal features associated with survival (Fig. 4): variability in the intensity difference between stromal matrix regions and their neighbors; presence of inflammatory cells in the stroma (associated with poor prognosis); and the relative border between spindled and round stromal nuclei. Top epithelial features (Fig. 5): the eight top-ranking epithelial features from the bootstrap analysis, including nuclear intensity diversity, the number of unclassified objects, distances between atypical epithelial nuclei, and shape/contiguity measures of epithelial regions (left panels, improved prognosis; right panels, worse prognosis). In contrast to prior approaches limited to predefined epithelial nuclear features, the fully automated system measures thousands of diverse morphologic descriptors and identified prognostic features whose significance was not previously recognized; intrapatient agreement across replicate TMA cores was only moderate, consistent with tumor heterogeneity, and prediction accuracy improved with increased tumor sampling.
  • 56. Gauss Surgical: Estimation of Blood Loss in Surgery with an iPad Camera
  • 57.
  • 58. 1. Surgical Sponge (Pixel App) FDA 510(k) Clearance in 2012
  • 59. 2. Suction Container (Triton App) FDA 510(k) Clearance in March 2015
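To make the idea of camera-based blood-loss estimation concrete, here is a deliberately simplified illustration, not Gauss Surgical's actual algorithm: estimate how blood-saturated each photographed sponge looks from its pixels, convert that to hemoglobin mass with a calibration curve, and accumulate an estimated loss across sponges. The thresholds, calibration constants, and function names are all hypothetical.

```python
# Toy image-based blood-loss estimation (illustrative only).
import numpy as np

def blood_fraction(rgb_image: np.ndarray) -> float:
    """Fraction of pixels that look blood-stained (red-dominant heuristic)."""
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    stained = (r > 90) & (r > 1.4 * g) & (r > 1.4 * b)
    return float(stained.mean())

def estimated_hemoglobin_g(rgb_image, sponge_capacity_ml=100.0, hgb_g_per_dl=12.0):
    """Toy calibration: stained fraction -> absorbed volume -> hemoglobin mass."""
    absorbed_ml = blood_fraction(rgb_image) * sponge_capacity_ml
    return absorbed_ml / 100.0 * hgb_g_per_dl        # ml -> dl, times g/dl -> grams

def estimated_blood_loss_ml(sponge_images, hgb_g_per_dl=12.0):
    """Sum hemoglobin over sponges, convert back to whole-blood volume."""
    total_hgb_g = sum(estimated_hemoglobin_g(img, hgb_g_per_dl=hgb_g_per_dl)
                      for img in sponge_images)
    return total_hgb_g / hgb_g_per_dl * 100.0        # grams / (g/dl) -> dl -> ml

# Example with synthetic sponge photos:
rng = np.random.default_rng(0)
sponges = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(3)]
print(round(estimated_blood_loss_ml(sponges), 1), "ml (toy estimate)")
```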
  • 60.
  • 61.
  • 63.
  • 65. • A 3D object is constructed by adding material in layers (usually sprayed or deposited) • Materials: rubber, plastics, paper, polyurethane, metals, and even cells. 3D printers: replicators in the real world
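The "building in layers" idea starts with slicing a 3D model at a series of Z heights. The sketch below intersects each triangle of a mesh with horizontal cutting planes, yielding the line segments a print head would trace for each layer; real slicers additionally join segments into closed contours, generate infill, and plan toolpaths. The tetrahedron at the end is a stand-in for an anatomical model.

```python
# Minimal mesh-slicing sketch: triangles intersected with planes Z = const.
import numpy as np

def slice_triangle(tri, z):
    """Return the segment where a triangle (3x3 vertex array) meets plane Z=z, or None."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        za, zb = tri[a, 2], tri[b, 2]
        if (za - z) * (zb - z) < 0:                       # edge crosses the plane
            t = (z - za) / (zb - za)
            pts.append(tri[a] + t * (tri[b] - tri[a]))
    return np.array(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height=0.2):
    """Group intersection segments by layer height."""
    zs = np.arange(triangles[:, :, 2].min(), triangles[:, :, 2].max(), layer_height)
    return {round(float(z), 3): [s for tri in triangles
                                 if (s := slice_triangle(tri, z)) is not None]
            for z in zs}

# Example: one tetrahedron as a stand-in for an anatomical model.
tet = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[0, 0, 0], [1, 0, 0], [0, 0, 1]],
    [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
], dtype=float)
layers = slice_mesh(tet, layer_height=0.25)
```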
  • 66.
  • 67. ‘Liberator’, the 3D Printed Gun
  • 68. $25, 3-D printed handgun
  • 71.
  • 73. http://www.telegraph.co.uk/technology/news/9066721/3D-printer-builds-new-jaw-bone-for-transplant.html • The artificial jaw was made from titanium powder, heated and built-up in layers in a 3D printer to create a working lower jaw which was then finished with a bioceramic coating. • The implant was fitted in an operation in the Netherlands in June 2011. 3D Printed Jaw
  • 75.
  • 77. Bioresorbable Airway Splint Created with a Three-Dimensional Printer
  • 78. Bioresorbable Airway Splint Created with a Three-Dimensional Printer N Engl J Med 2013; 368:2043-2045 • A custom-designed and custom-fabricated resorbable airway splint, which was manufactured from polycaprolactone with the use of a 3D printer • Our bellowed topology design provides resistance against collapse while simultaneously allowing flexion, extension, and expansion with growth.
  • 79. N Engl J Med 2013; 368:2043-2045 One year after surgery, imaging and endoscopy showed a patent left mainstem bronchus
  • 80. Mitigation of tracheobronchomalacia with 3D-printed personalized medical devices in pediatric patients. Morrison RJ et al. Sci Transl Med. 2015. Fig. 1: computational image-based design of 3D-printed tracheobronchial splints. The patient's CT (DICOM) images are segmented in Mimics to build a 3D airway model; a centerline is fit within the affected segment, and measurements of airway hydraulic diameter (DH) and length serve as design parameters. The splint uses a fixed 90° open angle for placement over the airway, while inner diameter, length, wall thickness, and the number and spacing of suture holes are adjusted to patient anatomy on the submillimeter scale; bellow height and periodicity allow flexion of the device. The design is generated in MATLAB as 2D image slices, converted to an .STL, virtually fitted over the segmented airway, and 3D printed in polycaprolactone (PCL). The splint resists collapse from positive intrathoracic pressure on expiration while permitting opening-wedge displacement as the airway grows.
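The paper's parameterization lends itself to a small design function: airway measurements in, printable design parameters out. The sketch below mirrors only the parameter set described above (inner diameter, length, wall thickness, fixed 90° open angle, suture holes); the specific sizing rules and margins are hypothetical and not taken from the paper.

```python
# Patient-specific splint parameterization sketch (sizing rules are assumptions).
from dataclasses import dataclass

@dataclass
class SplintDesign:
    inner_diameter_mm: float
    length_mm: float
    wall_thickness_mm: float
    open_angle_deg: float
    n_suture_holes: int

def design_splint(airway_dh_mm: float, segment_length_mm: float) -> SplintDesign:
    """Map airway measurements (DH, malacic segment length) to design parameters."""
    inner_d = airway_dh_mm * 1.1            # slight oversizing to allow growth (assumed)
    length = segment_length_mm + 4.0        # cover the malacic segment with margin (assumed)
    return SplintDesign(
        inner_diameter_mm=round(inner_d, 1),
        length_mm=round(length, 1),
        wall_thickness_mm=2.0,              # assumed constant wall thickness
        open_angle_deg=90.0,                # fixed open angle, as described in the paper
        n_suture_holes=max(4, int(length // 5)),
    )

print(design_splint(airway_dh_mm=6.2, segment_length_mm=22.0))
```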
  • 81. Mitigation of tracheobronchomalacia with 3D-printed personalized medical devices in pediatric patients. Morrison RJ et al. Sci Transl Med. 2015. The authors report successful implantation of 3D-printed, patient-specific bioresorbable airway splints for treatment of severe TBM; the personalized splints conformed to each patient's individual geometry, with a compressive allowance of less than 50% deformation under a 20-N load while still permitting enough bending displacement to accommodate flexion of the airway. Fig. 2: pre- and postoperative expiratory CT and follow-up MRI of the three patients show the splints in place around the affected bronchi (patient 2 received bilateral splints) and, at later time points, focal degradation of the resorbable splint.
  • 82. Mitigation of tracheobronchomalacia with 3D-printed personalized medical devices in pediatric patients. Morrison RJ et al. Sci Transl Med. 2015. Fig. 4: mean airway caliber over time. The airway hydraulic diameter (DH) was measured on expiratory-phase CT along the centerline of each isolated bronchus (every 0.1 to 1.0 mm along the affected segment) and averaged; splinted bronchi are compared with the normal contralateral bronchi of patients 1 and 3. Patient airway image-based computational design coupled with 3D printing allowed rapid production of the devices; the regulatory approval process and evaluation of patient candidacy took 7 days, and all devices were completed within this time frame.
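The caliber metric used on this slide, the hydraulic diameter, is a standard formula: DH = 4A/P, with A the lumen cross-sectional area and P its perimeter, averaged along the bronchial centerline. The sketch below computes it from polygonal cross-section contours; the contour extraction from CT is assumed to have been done upstream, and the example contours are synthetic near-circular lumens.

```python
# Hydraulic diameter DH = 4A/P averaged over cross-sections along a centerline.
import numpy as np

def hydraulic_diameter(contour_xy: np.ndarray) -> float:
    """DH = 4A/P for a closed polygonal cross-section contour given as (x, y) vertices."""
    x, y = contour_xy[:, 0], contour_xy[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))   # shoelace
    perim = np.sum(np.linalg.norm(
        np.diff(contour_xy, axis=0, append=contour_xy[:1]), axis=1))          # closed loop
    return 4.0 * area / perim

def mean_airway_caliber(contours):
    """Mean and SD of DH over all cross-sections sampled along the centerline."""
    dh = np.array([hydraulic_diameter(c) for c in contours])
    return float(dh.mean()), float(dh.std())

# Example: near-circular lumens of ~6 mm diameter sampled along the bronchus.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contours = [np.c_[3.0 * np.cos(theta), (3.0 + 0.2 * np.random.rand()) * np.sin(theta)]
            for _ in range(40)]
mean_dh, sd_dh = mean_airway_caliber(contours)
```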
  • 83. 3D Printed Skull • A 22-year-old female from the Netherlands • A chronic bone disorder had increased the thickness of her skull from 1.5 to 5 cm, causing reduced eyesight and severe headaches. • The top section of her skull was removed and replaced with a 3D-printed implant. March 2014
  • 84. 3D Printed Skull • Since the operation, the patient has gained her sight back entirely, is symptom-free and back to work. March 2014
  • 85. by Prof. Hyung Jin Choi (SNU) 3D printers for anatomy education: you cannot physically touch 3D simulated models
  • 86. by Prof. Hyung Jin Choi (SNU) 3D printers for anatomy education
  • 87. by Prof. Hyung Jin Choi (SNU) 3D printers for anatomy education
  • 88. Digital Future of the Surgery • Wearable Devices • Augmented Reality • Artificial Intelligence • 3D Printings
  • 89.
  • 90. Feedback/Questions • Email: yoonsup.choi@gmail.com • Blog: http://www.yoonsupchoi.com • Facebook: Yoon Sup Choi