Digital Future of the Surgery: Bringing the Innovation of Digital Technology into the Operating Room
1. Sungkyunkwan University
Department of Human ICT Convergence
Yoon Sup Choi, Ph.D.
Digital Future of the Surgery
: how to bring the innovation of digital technology into the operating room
10. • A technically fancy device, but skepticism about its usability
• Most functions can be achieved with smartphones
• It is necessary to find a specific use case that cannot be served by a smartphone
22. • While performing surgery, he used Google Glass to compare the patient’s CT scans.
• Google Glass doesn’t distract; it’s like glancing in the rearview mirror while driving a car.
Dr. Pierre Theodore, a cardiothoracic surgeon at UCSF Medical Center
August 2013
“It was extraordinarily helpful.”
23. • Consult with a distant colleague using live video from the OR via Google Glass
• Live streamed to the laptop of the medical school students
Dr. Christopher Kaeding, Ohio State University Wexner Medical Center
August 2013
US doctor performs first live Google Glass surgery
25. UC Irvine School of Medicine first to
integrate Google Glass into curriculum
April 2014
UC Irvine School of Medicine is taking steps
to become the first in the nation to
integrate the wearable computer into its
four-year curriculum – from first- and
second-year anatomy courses and clinical
skills training to third- and fourth-year
hospital rotations.
26. Google Glass enters
operating room at Stanford
July 2014
Stanford University Medical
Center’s Department of Cardiothoracic
Surgery has started using Google Glass
in its resident training program.
While a resident is operating on a
patient, surgeons can use the
CrowdOptic software to watch the
resident’s progress and send visual
feedback to the resident on technique.
28. Augmented Reality
Augmented Reality is a technology that enriches the real world with digital information and media, such as 3D models and videos, overlaid in real time on the camera view of your smartphone, tablet, PC or connected glasses.
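To make the overlay idea concrete, here is a minimal sketch (not from the talk) that blends a static rendered image into a live camera feed with OpenCV; real AR systems would add marker or pose tracking, and model_render.png is a hypothetical file name.

```python
# Minimal AR-style overlay: blend a digital image into the live camera
# view in real time (marker/pose tracking omitted for brevity).
import cv2

overlay = cv2.imread("model_render.png")   # hypothetical rendered 3D model
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = overlay.shape[:2]                # assumes overlay fits in frame
    roi = frame[:h, :w]
    frame[:h, :w] = cv2.addWeighted(roi, 0.5, overlay, 0.5, 0)
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```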
29. Extreme future of AR?
http://gencept.com/sight-an-8-minute-augmented-reality-journey-video
39. VIPAAR: Remote Surgery Support
Using VIPAAR, a remote surgeon is able to put his or her hands
into the surgical field and provide collaboration and assistance.
47. IBM Watson on Medicine
Watson learned:
• 600,000 pieces of medical evidence
• 2 million pages of text from 42 medical journals and clinical trials
• 69 guidelines and 61,540 clinical trials
• 1,500 lung cancer cases, with physician notes, lab results and clinical research
• 14,700 hours of hands-on training
48. • Treatment plan suggestions with confidence levels (a hypothetical output structure is sketched below)
• Evidence behind the suggestions: articles, best practices, guidelines
• Suggestions of eligible clinical trials
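As an illustration only (this is not IBM’s actual API), a suggestion of the kind listed above could be represented as a small data structure pairing a plan with a confidence level, supporting evidence, and eligible trials; every name and value below is hypothetical.

```python
# Hypothetical sketch of the output shape slide 48 describes.
from dataclasses import dataclass, field

@dataclass
class TreatmentSuggestion:
    plan: str
    confidence: float                                   # e.g. 0.0 - 1.0
    evidence: list[str] = field(default_factory=list)   # articles, guidelines
    eligible_trials: list[str] = field(default_factory=list)

suggestion = TreatmentSuggestion(
    plan="Carboplatin + paclitaxel",                    # placeholder plan
    confidence=0.84,
    evidence=["NCCN guideline (hypothetical citation)"],
    eligible_trials=["NCT00000000 (placeholder ID)"])
print(suggestion.plan, suggestion.confidence)
```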
50. DeepFace: Closing the Gap to Human-Level Performance in Face Verification
Taigman, Y. et al. (2014). DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR’14.
Figure 2. Outline of the DeepFace architecture. A front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three
locally-connected layers and two fully-connected layers. Colors illustrate feature maps produced at each layer. The net includes more than 120 million
parameters, where more than 95% come from the local and fully connected layers.
The first layers have very few parameters; they merely expand the input into a set of simple local features. The subsequent layers (L4, L5 and L6) are instead locally connected [13, 16]: like a convolutional layer they apply a filter bank, but every location in the feature map learns a different set of filters. Since different regions of an aligned image have different local statistics, the spatial stationarity assumption of convolution cannot hold. The goal of training is to maximize the probability of the correct class (face id). We achieve this by minimizing the cross-entropy loss for each training sample. If k is the index of the true label for a given input, the loss is L = −log p_k. The loss is minimized over the parameters by computing the gradient of L w.r.t. the parameters and by updating the parameters using stochastic gradient descent (SGD).
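The training objective above is easy to make concrete. Below is a minimal numpy sketch, assuming a bare linear classifier as a stand-in for the network’s final layer (DeepFace itself has over 120 million parameters): softmax probabilities, the loss L = −log p_k, and one stochastic gradient step.

```python
import numpy as np

def cross_entropy_loss(logits, true_index):
    # Softmax over class scores, then the negative log-probability of
    # the correct class (face id): L = -log p_k.
    shifted = logits - logits.max()              # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(probs[true_index]), probs

def sgd_step(W, x, true_index, lr=0.01):
    # One SGD update for a linear classifier W (a hypothetical stand-in
    # for the network's last fully-connected layer).
    loss, probs = cross_entropy_loss(W @ x, true_index)
    grad_logits = probs.copy()
    grad_logits[true_index] -= 1.0               # dL/dlogits for softmax+CE
    W -= lr * np.outer(grad_logits, x)           # dL/dW
    return loss

W = np.zeros((10, 64))                           # 10 identities, 64-dim input
x = np.random.randn(64)
print(sgd_step(W, x, true_index=3))              # loss shrinks over steps
```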
Human: 95% vs. Facebook’s DeepFace: 97.35%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
51. FaceNet: A Unified Embedding for Face Recognition and Clustering
Schroff, F. et al. (2015). FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR’15.
Human: 95% vs. Google’s FaceNet: 99.63%
Recognition Accuracy for Labeled Faces in the Wild (LFW) dataset (13,233 images, 5,749 people)
Figure 6. LFW errors (false accepts and false rejects). This shows all pairs of images that were incorrectly classified on LFW. Only eight of the 13 errors shown here are actual errors; the other four are mislabeled in LFW.
5.7. Performance on YouTube Faces DB
We use the average similarity of all pairs of the first one hundred frames that our face detector detects in each video. This gives us a classification accuracy of 95.12% ± 0.39. Using the first one thousand frames results in 95.18%. Compared to [17] (91.4%), which also evaluates one hundred frames per video, we reduce the error rate by almost half. DeepId2+ [15] achieved 93.2%, and our method reduces this error by 30%, comparable to our improvement on LFW.
5.8. Face Clustering
Our compact embedding lends itself to clustering a user’s personal photos into groups of people with the same identity. The constraints in assignment imposed by clustering faces, compared to the pure verification task, lead to truly amazing results. Figure 7 shows one cluster in a user’s personal photo collection, generated using agglomerative clustering. It is a clear showcase of the incredible invariance to occlusion, lighting, pose and even age.
Figure 7. Face Clustering. Shown is an exemplar cluster for one user. All these images in the user’s personal photo collection were clustered together.
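A minimal sketch of the clustering step described above, assuming scikit-learn’s agglomerative clustering over L2-normalized embedding vectors; the random embeddings and the distance threshold of 1.1 are stand-in assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in for real FaceNet outputs: (n_faces, 128) L2-normalized vectors.
embeddings = np.random.randn(20, 128)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Merge faces by Euclidean distance; the distance threshold plays the
# role of the same/different-identity verification threshold.
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.1, linkage="average")
labels = clusterer.fit_predict(embeddings)
print(labels)   # faces sharing a label are grouped as one identity
```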
6. Summary
We provide a method to directly learn an embedding into a Euclidean space for face verification. This sets it apart from other methods [15, 17], which use the CNN bottleneck layer or require additional post-processing such as concatenation of multiple models and PCA, as well as SVM classification. Our end-to-end training both simplifies the setup and shows that directly optimizing a loss relevant to the task at hand improves performance. Another strength of our model is that it only requires minimal alignment (tight crops around the face area).
52. Business Area
Medical Image Analysis
VUNOnet and our machine learning technology will help doctors and hospitals manage medical scans and images intelligently, making diagnoses faster and more accurate.
[Figure: original image → automatic segmentation; tissue classes: normal, emphysema, reticular opacity]
Our system finds DILDs with the highest accuracy (*DILDs: diffuse interstitial lung diseases)
Digital Radiologist
54. C-Path image analysis pipeline (Fig. 1, A to D):
(A) Basic image processing and feature construction: the H&E image is broken into superpixels, and nuclei are identified within each superpixel.
(B) Building an epithelial/stromal classifier (epithelial vs. stroma).
(C) Constructing higher-level contextual/relational features: relationships between epithelial nuclear neighbors; between morphologically regular and irregular nuclei; between epithelial and stromal objects; between epithelial nuclei and cytoplasm; relationships of contiguous epithelial regions with underlying nuclear objects; characteristics of stromal nuclei and stromal matrix; characteristics of epithelial nuclei and epithelial cytoplasm.
(D) Learning an image-based model to predict survival from processed patient images.
TMAs contain 0.6-mm-diameter cores (median of two cores per case) that represent only a small sample of the full tumor. We acquired data from two separate and independent cohorts: Netherlands Cancer Institute (NKI; 248 patients) and Vancouver General Hospital (VGH; 328 patients). Unlike previous work in cancer morphometry (18–21), our image analysis pipeline was not limited to a predefined set of morphometric features selected by pathologists. Rather, C-Path measures an extensive, quantitative feature set from the breast cancer epithelium and the stroma (Fig. 1). Our image processing system first performed an automated, hierarchical scene segmentation that generated thousands of measurements, including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. The pipeline consisted of three stages (Fig. 1, A to C, and tables S8 and S9). First, we used a set of processing steps to separate the tissue from the background, partition the image into small regions of coherent appearance known as superpixels, find nuclei within the superpixels, and construct basic image features.
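A minimal sketch of the first stage (superpixel partitioning), assuming scikit-image; the C-Path system itself is a custom pipeline, and the file name and parameter values here are hypothetical.

```python
import numpy as np
from skimage.io import imread
from skimage.segmentation import slic

image = imread("he_stain_tile.png")    # hypothetical H&E image tile
# Partition the image into small regions of coherent appearance
# (superpixels), as in stage 1 of the pipeline.
segments = slic(image, n_segments=400, compactness=10)
print("superpixels found:", len(np.unique(segments)))
# Nuclei would next be detected within each superpixel, e.g. by
# thresholding the dark, hematoxylin-stained pixels per region.
```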
Learning an image-based model to predict survival (Fig. 1D): processed images from patients alive at 5 years and from patients deceased at 5 years feed L1-regularized logistic regression model building to produce the 5YS (5-year survival) predictive model, which is then applied to unlabeled images to estimate P(survival) over time.
Identification of novel prognostically important morphologic features.
Image objects are classified into basic cellular morphologic properties (epithelial regular nuclei = red; epithelial atypical nuclei = pale blue; epithelial cytoplasm = purple; stromal matrix = green; stromal round nuclei = dark green; stromal spindled nuclei = teal blue; unclassified regions = dark gray; spindled nuclei in unclassified regions = yellow; round nuclei in unclassified regions = gray; background = white). (Left panel) After the classification of each image object, a rich feature set is constructed. (D) Learning an image-based model to predict survival. Processed images from patients alive at 5 years after surgery and from patients deceased at 5 years after surgery were used to construct an image-based prognostic model. After construction of the model, it was applied to a test set of breast cancer images (not used in model building) to classify patients as at high or low risk of death by 5 years.
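The model-building step in panel D maps directly onto L1-regularized logistic regression as found in scikit-learn; the sketch below uses random stand-in data in place of the real image features and survival labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # 200 patients x 500 image features (fake)
y = rng.integers(0, 2, size=200)   # 1 = alive at 5 years, 0 = deceased (fake)

# The L1 penalty drives most coefficients to zero, selecting a sparse
# set of prognostic features, as in the C-Path 5YS model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
print("features kept:", int(np.sum(model.coef_ != 0)))
p_survival = model.predict_proba(X[:5])[:, 1]   # P(survival) for new images
```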
Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide
an accurate and reproducible means for assessing prognosis from microscopic image data.
55. Digital Pathologist
Sci Transl Med. 2011 Nov 9;3(108):108ra113
Top stromal features associated with survival.
Previous approaches to cancer morphometry relied on features primarily characterizing epithelial nuclear characteristics, such as size, color, and texture (21, 36). In contrast, after initial filtering of images to ensure high-quality TMA images and training of the C-Path models using expert-derived image annotations (epithelium and stroma labels to build the epithelial-stromal classifier, and survival time and survival status to build the prognostic model), our image analysis system is automated with no manual steps, which greatly increases its scalability. Additionally, in contrast to previous approaches, our system measures thousands of morphologic descriptors of diverse structures, enabling the identification of prognostic features whose significance was not previously recognized.
Using our system, we built an image-based prognostic model on the NKI data set and showed that in this patient cohort the model was a strong predictor of survival and provided significant additional prognostic information to clinical, molecular, and pathological prognostic factors in a multivariate model. We also demonstrated that the image-based prognostic model, built using the NKI data set, is a strong prognostic factor on another, independent data set with very different characteristics.
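The epithelial-stromal classifier mentioned above is a supervised model trained on expert-labeled superpixels. The sketch below is an assumed setup using a random forest on random stand-in features; the paper does not specify this exact classifier or these values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 40))   # color/texture/shape per superpixel
labels = rng.integers(0, 2, size=1000)   # 1 = epithelium, 0 = stroma (expert)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, labels)
# Each superpixel of a new image is classified before the higher-level
# contextual/relational features are computed.
print("training accuracy:", clf.score(features, labels))
```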
Fig. 5. Top epithelial features. The eight panels in the figure (A to H) each show one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis. (A) SD of the (SD of intensity/mean intensity) for pixels within a ring of the center of epithelial nuclei. Left, relatively consistent nuclear intensity pattern (low score); right, great nuclear intensity diversity (high score). (B) Sum of the number of unclassified objects. Red, epithelial regions; green, stromal regions; no overlaid color, unclassified region. Left, few unclassified objects (low score); right, higher number of unclassified objects (high score). (C) SD of the maximum blue pixel value for atypical epithelial nuclei. Left, high score; right, low score. (D) Maximum distance between atypical epithelial nuclei. Left, high score; right, low score. (Insets) Red, atypical epithelial nuclei; black, typical epithelial nuclei. (E) Minimum elliptic fit of epithelial contiguous regions. Left, high score; right, low score. (F) SD of distance between epithelial cytoplasmic and nuclear objects. Left, high score; right, low score. (G) Average border between epithelial cytoplasmic objects. Left, high score; right, low score. (H) Maximum value of the minimum green pixel intensity value in epithelial contiguous regions. Left, low score indicating black pixels within epithelial region; right, higher score indicating presence of epithelial regions lacking black pixels.
… and stromal matrix throughout the image, with thin cords of epithelial cells infiltrating through stroma across the image, so that each stromal matrix region borders a relatively constant proportion of epithelial and stromal regions. The stromal feature with the second largest coefficient (Fig. 4B) was the sum of the minimum green intensity value of stromal-contiguous regions. This feature received a value of zero when stromal regions contained dark pixels (such as inflammatory nuclei). The feature received a positive value when stromal objects were devoid of dark pixels. This feature provided information about the relationship between stromal cellular composition and prognosis and suggested that the presence of inflammatory cells in the stroma is associated with poor prognosis, a finding consistent with previous observations (32). The third most significant stromal feature (Fig. 4C) was a measure of the relative border between spindled stromal nuclei to round stromal nuclei, with an increased relative border of spindled stromal nuclei to round stromal nuclei associated with worse overall survival. Although the biological underpinning of this morphologic feature is currently not known, this analysis suggested that spatial relationships between different populations of stromal cell types are associated with breast cancer progression.
Reproducibility of C-Path 5YS model predictions on samples with multiple TMA cores
For the C-Path 5YS model (which was trained on the full NKI data set), we assessed the intrapatient agreement of model predictions when predictions were made separately on each image contributed by patients in the VGH data set. For the 190 VGH patients who contributed two images with complete image data, the binary predictions (high or low risk) on the individual images agreed with each other for 69% (131 of 190) of the cases and agreed with the prediction on the averaged data for 84% (319 of 380) of the images. Using the continuous prediction score (which ranged from 0 to 100), the median of the absolute difference in prediction score among the patients with replicate images was 5%, and the Spearman correlation among replicates was 0.27 (P = 0.0002) (fig. S3). This degree of intrapatient agreement is only moderate, and these findings suggest significant intrapatient tumor heterogeneity, which is a cardinal feature of breast carcinomas (33–35). Qualitative visual inspection of images receiving discordant scores suggested that intrapatient variability in both the epithelial and the stromal components is likely to contribute to discordant scores for the individual images. These differences appeared to relate both to the proportions of the epithelium and stroma and to the appearance of the epithelium and stroma. Last, we sought to analyze whether survival predictions were more accurate on the VGH cases that contributed multiple cores compared to the cases that contributed only a single core. This analysis showed that the C-Path 5YS model showed significantly improved prognostic prediction accuracy on the VGH cases for which we had multiple images compared to the cases that contributed only a single image (Fig. 7). Together, these findings show a significant degree of intrapatient variability and indicate that increased tumor sampling is associated with improved model performance.
[Fig. 4 panels: H&E image separated into epithelial and stromal objects; heat map of stromal matrix objects’ mean absolute difference to neighbors; paired examples of improved vs. worse prognosis.]
Fig. 4. Top stromal features associated with survival. (A) Variability in absolute difference in intensity between stromal matrix regions and neighbors. Top panel, high score (24.1); bottom panel, low score (10.5). (Insets) Top panel, high score; bottom panel, low score. Right panels, stromal matrix objects colored blue (low), green (medium), or white (high) according to each object’s absolute difference in intensity to neighbors. (B) Presence …
Top epithelial features. The eight panels in the figure (A to H) each show one of the top-ranking epithelial features from the bootstrap analysis. Left panels, improved prognosis; right panels, worse prognosis.
65. • 3D object is constructed by adding material in layers (usually sprayed)
• Materials: rubber, plastics, paper, polyurethane, metals, and even cells
3D printers: Replicators in the real world
78. Bioresorbable Airway Splint
Created with a Three-Dimensional Printer
N Engl J Med 2013; 368:2043-2045
• A custom-designed and custom-fabricated resorbable airway splint, which was
manufactured from polycaprolactone with the use of a 3D printer
• Our bellowed topology design provides resistance against collapse while
simultaneously allowing flexion, extension, and expansion with growth.
79. N Engl J Med 2013; 368:2043-2045
One year after surgery, imaging and
endoscopy showed a patent left
mainstem bronchus
80. Morrison RJ et al. Sci Transl Med. 2015
Fig. 1. Computational image-based design of 3D-printed tracheobronchial splints. (A) Stereolithography (.STL) representation (top) and virtual rendering (bottom) of the tracheobronchial splint demonstrating the bounded design parameters of the device. We used a fixed open angle of 90° to allow placement of the device over the airway. Inner diameter, length, wall thickness, and number and spacing of suture holes were adjusted according to patient anatomy (Table 1) and can be adjusted on the submillimeter scale. Bellow height and periodicity (ribbing) can be adjusted to allow additional flexion of the device in the z axis. (B) Mechanism of action of the tracheobronchial splint in treating tracheobronchial collapse in TBM. Solid arrows denote positive intrathoracic pressure generated on expiration. Hollow arrow denotes vector of tracheobronchial collapse. Dashed arrow denotes vector of opening wedge displacement of the tracheobronchial splint with airway growth. (C) Digital Imaging and Communications in Medicine (DICOM) images of the patient’s CT scan were used to generate a 3D model of the patient’s airway via segmentation in Mimics. A centerline was fit within the affected segment of the airway, and measurements of airway hydraulic diameter (DH) and length were used as design parameters to generate the device design. (D) Design parameters were input into MATLAB to generate an output as a series of 2D .TIFF image slices using Fourier series representation. Light and gray areas indicate structural components; dark areas are voids. The top image demonstrates a device bellow, and the bottom image demonstrates suture holes incorporated into the device design. The .TIFF images were imported into Mimics to generate an .STL of the final splint design. (E) Virtual assessment of fit of the tracheobronchial splint over the segmented primary airway model for all patients. (F) Final 3D-printed PCL tracheobronchial splint used to treat the left bronchus of patient 2. The splint incorporated a 90° spiral to the open angle of the device to accommodate concurrent use of a right bronchial splint and growth of the right bronchus.
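As a toy illustration of panel (D), the sketch below rasterizes one cross-section of an open-angle splint ring into an image slice; the paper’s MATLAB code used a Fourier series representation of the bellowed geometry, and all dimensions here are assumed, not taken from Table 1.

```python
import numpy as np
import imageio.v2 as imageio

inner_d, wall = 8.0, 2.0                  # mm; assumed, not from the paper
open_angle = np.deg2rad(90)               # fixed 90° open angle (per caption)
px = 0.05                                 # mm per pixel
n = int((inner_d + 2 * wall + 2) / px)    # image spans the ring plus margin
yy, xx = np.mgrid[:n, :n]
x = (xx - n / 2) * px
y = (yy - n / 2) * px
r = np.hypot(x, y)
theta = np.arctan2(y, x)

# Structural pixels (light): inside the wall annulus and outside the
# open-angle wedge; everything else (dark) is void.
ring = (r >= inner_d / 2) & (r <= inner_d / 2 + wall)
opening = np.abs(theta) <= open_angle / 2
slice_img = ((ring & ~opening) * 255).astype(np.uint8)
imageio.imwrite("slice_000.tiff", slice_img)   # one z slice of many
```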
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
81. DISCUSSION
We report successful implantation of 3D-printed, patient-specific bioresorbable airway splints for treatment of severe TBM. The personalized splints conformed to the patients’ individual geometries and expanded …
… compression) (20). Thus, we defined our maximum compressive allowance as less than 50% deformation under a 20-N load. However, a similar degree of bending compliance was too low for the splint to be effective at maintaining airway patency. We expected that under a 20-N load, the splint should allow greater than 20% displacement in bending to accommodate flexion of the airway, but less than 50% displacement (greater than which may interrupt airflow).
Fig. 2. Pre- and postoperative imaging of patients. Black arrows in all figures denote the location of the malacic segment of the airway. White arrows designate the location/presence of the tracheobronchial splint. Asterisk denotes focal degradation of the splint. All CT images are coronal minimum intensity projection (MinIP) reformatted images of the lung and airway on expiration. All MRI images are axial proton density turbo spin echo MRI images of the chest. (A) Preoperative (top) and 1-month postoperative (upper middle) CT images of patient 1. Postoperative MRI demonstrated presence of the splint around the left bronchus in patient 1 at 12 months (lower middle) and focal fragmentation of the splint due to degradation at 39 months (bottom). (B) Preoperative (top) and 1-month postoperative (upper middle) CT images of patient 2. Postoperative MRI (lower middle) demonstrated presence of splints around the left and right bronchi in patient 2 at 1 month. Note that the patient had bilateral mainstem bronchomalacia and received a tracheobronchial splint on both the left and right mainstem bronchus. (C) Preoperative (top) and 1-month postoperative (bottom) CT images of patient 3.
Morrison RJ et al. Sci Transl Med. 2015
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
82. … pressure (table S2). Patient airway image–based computational design coupled with 3D printing allowed rapid production of these devices. The regulatory approval process and evaluation of patient candidacy needed 7 days. All devices were completed within this time frame. Design and …
MATERIALS AND METHODS
Study design
Our hypothesis was that an external splint could be designed to obtain …
Fig. 4. Mean airway caliber over time. Patient airway DH was measured over time after implantation of the 3D-printed bioresorbable material. Solid lines denote bronchi that received the tracheobronchial splint. Dashed lines are normal, contralateral bronchi for patients 1 and 3. All caliber measurements were made on expiratory-phase CT imaging using the centerline function of each isolated bronchus in Mimics. The centerline function measures DH every 0.1 to 1.0 mm along the entire segment of the isolated model. Measurements are represented as averages of all measurements along the length of the isolated affected bronchus model ± SD. Pre-op, preoperative.
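For reference, hydraulic diameter has the standard definition DH = 4A/P (cross-sectional area over perimeter); the sketch below averages it along a bronchus centerline, with made-up per-slice measurements standing in for the Mimics output.

```python
import numpy as np

def hydraulic_diameter(area_mm2, perimeter_mm):
    # Standard definition: DH = 4 * cross-sectional area / perimeter.
    return 4.0 * area_mm2 / perimeter_mm

# Hypothetical per-slice measurements along the isolated bronchus
# (the paper extracted these with the centerline function in Mimics).
areas = np.array([12.1, 11.4, 10.8, 11.9, 12.5])        # mm^2
perimeters = np.array([13.0, 12.6, 12.2, 12.9, 13.3])   # mm
dh = hydraulic_diameter(areas, perimeters)
print(f"DH = {dh.mean():.2f} +/- {dh.std(ddof=1):.2f} mm")
```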
Morrison RJ et al. Sci Transl Med. 2015
Mitigation of tracheobronchomalacia with 3D-printed
personalized medical devices in pediatric patients
83. 3D Printed Skull
• A 22-year-old female from the Netherlands
• A chronic bone disorder had increased the thickness of her skull from 1.5 to 5 cm, causing reduced eyesight and severe headaches.
• Top section of skull was removed and replaced with a 3D printed implant.
March 2014
84. 3D Printed Skull
• Since the operation, the patient has gained her sight back entirely,
is symptom-free and back to work.
March 2014
85. By Prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
3D simulated models cannot be physically touched, unlike printed ones.
86. By Prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
87. By Prof. Hyung Jin Choi (SNU)
3D printers for anatomy education
88. Digital Future of the Surgery
• Wearable Devices
• Augmented Reality
• Artificial Intelligence
• 3D Printing