We work with a variety of imaging domains, including radiology, pathology, and ophthalmology. The goal in applying these methods to large numbers of images is to automatically detect abnormalities, to segment them, and to identify phenotypes in the images that can be used for automatic disease classification and to enable "precision medicine."
We have a number of publications on these topics in the Publications area. Pathology images contain a wealth of information at the microscopic scale, revealing information about tissue morphology and, through the use of special stains, the underlying biological function. Most pathology images, like radiology images, are currently interpreted by human observers (pathologists), but many subtle features of disease may be overlooked.
We are developing methods for computerized analysis of quantitative features within pathology images, with the goal of defining "imaging phenotypes" that better characterize disease and enable precision medicine. Tasks we focus on include detecting abnormalities, segmenting them, and identifying imaging phenotypes for disease classification. Imaging is crucial for assessing patients with cancer and for monitoring their response to treatment. However, current methods for quantifying the amount of tumor in the body, whether it is biologically active, and whether it is optimally responding to treatment are limited to simplistic measurements that are inaccurate and subject to inter-observer variation.
In addition, the workflow for capturing quantitative information from images is difficult, time-consuming, and costly. Our goal is to build tools that automate and streamline the process of identifying, measuring, and assessing the amount of tumor in patients, enabling oncologists to readily determine the response of individual patients and cohorts to a variety of cancer treatments. Imaging is a major component of the evaluation of the retina.
Beyond fundus photography, the advent of optical coherence tomography (OCT) is revolutionizing practice by permitting high-resolution, three-dimensional imaging of the retina. OCT images contain an enormous amount of information characterizing the phenotype of retinal diseases, but presently only a fraction of that information is extracted and used for clinical decision making. Our Quantitative Retinal Image Group is developing methods to optimally leverage latent quantitative information in OCT images to enable precision care in ophthalmology.
We have several projects in quantitative retinal imaging that collectively comprise a pipeline for processing these large datasets, leading from the raw image to disease assessment and clinical decision support. We are currently applying these methods to provide robust disease assessment and prediction of clinical outcome in AMD, glaucoma, diabetic retinopathy, and retinitis pigmentosa.
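As a rough illustration of what such a pipeline looks like, the sketch below runs a synthetic B-scan through three stages: layer segmentation, thickness quantification, and an assessment rule. Every function name, the crude threshold-based segmentation, the axial resolution, and the 200 µm flag threshold are illustrative assumptions for this sketch, not our actual methods.

```python
import numpy as np

def segment_layers(bscan):
    # Stand-in for retinal layer segmentation: threshold intensity to get
    # a crude "retina" mask per A-scan column (real pipelines use trained
    # segmentation models, not a global threshold).
    return bscan > bscan.mean()

def thickness_profile(mask, axial_res_um=3.9):
    # Thickness per A-scan column = segmented pixel count * axial resolution.
    return mask.sum(axis=0) * axial_res_um

def assess(thickness_um, thin_threshold_um=200.0):
    # Toy disease-assessment rule: flag scans whose mean thickness falls
    # below a threshold (the threshold value is illustrative only).
    return "flag for review" if thickness_um.mean() < thin_threshold_um else "within range"

rng = np.random.default_rng(0)
bscan = rng.normal(size=(496, 512))   # synthetic B-scan with OCT-like dimensions
mask = segment_layers(bscan)
profile = thickness_profile(mask)
print(assess(profile))                # prints "within range" for this synthetic scan
```

In a real system each stage would be replaced by validated models, but the raw-image-to-decision-support flow is the same.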
Images contain detailed information describing the phenotype of disease. Much exciting research is leveraging the biological signal in molecular data to discover subtypes of disease; however, analyses based purely on molecular data miss the opportunity to incorporate phenotypic information into computational models that characterize the disease. Our lab is developing methods to integrate the information characterizing disease phenotype in images with molecular data, applying machine learning methods to discover subtypes of disease and to create computational models that predict the best treatments, assess treatment response, and predict clinical outcomes.
Our computational approach is to develop models that extend the biological dogma, relating the phenotype in images to the molecular underpinnings of those phenotypes, consistent with the dogma that "form follows function." The number of images in radiology is exploding. Diagnostic radiologists are confronted with the challenge of efficiently and accurately interpreting cross-sectional imaging exams that now often contain thousands of images per patient study.
Currently, this is largely an unassisted process, and a given reader's accuracy is established through training and experience. There is significant variation in interpretation between radiologists, and accuracy varies widely, a problem compounded by increasing image numbers. There is an opportunity to improve diagnostic decision making by enabling radiologists to search databases of radiological images and reports for cases that are similar in terms of shared imaging features to the images they are interpreting.
We are creating software tools that can be used to create and search databases of radiological images based on image features, which include detailed information about lesions: (1) feature descriptors coded by radiologists using RadLex, a comprehensive controlled terminology, and (2) computer-generated features of pixels characterizing the lesion's interior texture and the sharpness of its boundary.
Our goal is to develop methods to facilitate the retrieval of radiological images that contain similarly appearing lesions. Medical images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow.
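As a sketch of the computer-generated half of such a feature set, the toy code below summarizes a lesion patch by interior intensity statistics plus a gradient-based boundary-sharpness measure, then retrieves the most similar database patch by Euclidean distance in feature space. The specific features and function names here are illustrative assumptions, not the features used in our tools.

```python
import numpy as np

def lesion_features(patch):
    # Illustrative feature vector: interior texture summarized by intensity
    # mean and standard deviation, plus mean gradient magnitude overall and
    # along the patch border as a crude proxy for boundary sharpness.
    gy, gx = np.gradient(patch.astype(float))
    grad = np.hypot(gx, gy)
    border = np.ones_like(patch, dtype=bool)
    border[1:-1, 1:-1] = False
    return np.array([patch.mean(), patch.std(), grad.mean(), grad[border].mean()])

def most_similar(query_patch, database_patches):
    # Nearest-neighbor retrieval in feature space (Euclidean distance).
    q = lesion_features(query_patch)
    dists = [np.linalg.norm(q - lesion_features(p)) for p in database_patches]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
db = [rng.normal(loc=m, size=(32, 32)) for m in (0.0, 5.0, 10.0)]
query = rng.normal(loc=5.0, size=(32, 32))
print(most_similar(query, db))  # retrieves index 1, the patch with matching intensity
```

Production retrieval systems index precomputed feature vectors rather than recomputing them per query, and combine such pixel-derived features with the RadLex-coded semantic descriptors.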
We have created ePad, an open source tool enabling researchers and clinicians to create semantic annotations on images. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. This example from na1 indicates that the specificity of the annotations is not significantly different for specific instances, but is different when compared overall.
Phytozome's method stands out because it has no annotations for na1; thus, all the metrics have a value of 0. To determine how best to create an improved maize annotation dataset, we tried multiple methods and compared the resulting datasets against a gold standard set of gene functions.
This also enabled us to better understand differences in term assignments among the methods we used. Using the same gold standard, we were also able to compare the resulting datasets to those produced by and available from Gramene and Phytozome. This could be due to the dearth of training data for the more specific GO terms, which machine learning methods require. While the higher coverage could cause concern, evaluating the annotations against the gold standard showed that performance is similar to or better than that of existing datasets.
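A gold-standard comparison of this kind reduces, per gene, to comparing a method's predicted GO term set against the curated set. The sketch below computes set-based precision, recall, and F1; the paper's exact evaluation metrics may differ, and while the GO IDs shown are real, the gene assignments are hypothetical.

```python
def annotation_metrics(predicted, gold):
    # Set-based precision/recall/F1 for one gene's GO annotations.
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gene: curated terms vs. one method's predictions.
gold = {"GO:0008150", "GO:0009058"}   # biological_process, biosynthetic process
pred = {"GO:0008150"}
print(annotation_metrics(pred, gold))   # (1.0, 0.5, 0.666...)

# A method with no annotations for a gene (as with Phytozome for na1)
# scores 0 on every metric.
print(annotation_metrics(set(), gold))  # (0.0, 0.0, 0.0)
```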
When aggregating the different datasets, it was important to remove less specific GO terms annotated by some methods in cases where more specific GO terms were annotated to the same genes. In certain cases, more than one annotation with lower specificity was replaced by a single annotation with higher specificity. As future iterations of the CAFA competition evaluate new tools and methods for GO annotation, we anticipate that the quality of computational maize GO annotations can be iteratively improved in a reproducible manner by continuing to apply the newest, best-performing methods.
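The specificity pruning described above can be implemented as ancestor removal over the GO graph: a term is dropped when it is an ancestor of another term annotated to the same gene. The sketch below uses real GO IDs but a hand-written three-term fragment of the DAG; a real implementation would traverse the full ontology.

```python
def prune_less_specific(terms, parents):
    # Keep only a gene's most specific annotations: drop any term that is
    # an ancestor (via `parents`, a term -> direct-parents map) of another
    # annotated term.
    def ancestors(t):
        seen, stack = set(), list(parents.get(t, ()))
        while stack:
            a = stack.pop()
            if a not in seen:
                seen.add(a)
                stack.extend(parents.get(a, ()))
        return seen

    covered = set()
    for t in terms:
        covered |= ancestors(t)
    return {t for t in terms if t not in covered}

# Toy DAG fragment (real GO IDs, simplified parentage):
# biosynthetic process is_a metabolic process is_a biological_process.
parents = {
    "GO:0009058": ["GO:0008152"],
    "GO:0008152": ["GO:0008150"],
}
print(prune_less_specific({"GO:0008150", "GO:0008152", "GO:0009058"}, parents))
# keeps only GO:0009058, the most specific of the three
```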
To enable better reproducibility, we have generated a supplementary document with exact parameters and commands used to generate the maize dataset. The pipeline will be made freely available and will utilize the same methods and datasets used for maize. The set of manually reviewed gene function annotations for maize that we call the gold standard is both incomplete and sparse.
This situation does not reflect the amount of published literature describing gene function for maize. Instead, it is due to limited curation of gene function into GO terms. While tools exist at MaizeGDB that enable researchers to assign GO terms to genes directly, these tools remain poorly utilized.
Plant Direct, Volume 2, Issue 4. Figure 1: Numbers of annotations by EC category.
Each bar in the histogram is labeled with the actual count to show where counts are so small that no bar is visible. For each individual output, duplications and redundancies were removed, and then the datasets were combined. Argot2 has a batch-processing tool that can annotate up to 5,000 preprocessed input sequences. Figure 3: GO assignment metrics for each method type. Color codes as used in (a) and (b), with the aggregate dataset shown in orange.
Figure 5: Biological Process GO graph for maize na1. Leaf terms are toward the bottom, and root terms are toward the top. Terms covered only by the gold standard are shown in orange (labeled G), those in the dataset but absent from the gold standard are shown in blue (labeled D), and those that appear in both are shown in green (labeled DG).