The evolution of intracellular compartments: from evolutionary cell biology to translational research
Submitted by smadeira on Sat, 12/04/2010 - 00:37.
Eukaryotic cells have a complex organization into membrane-delimited organelles. Whereas we have been steadily accumulating mechanistic data on the organization and regulation of these compartments, far less is understood about their origins and evolution. I will discuss our recent work on the evolutionary analysis of three types of intracellular compartments: endosymbiotic, endomembranous, and microtubule-derived. To study evolution at this level we have had to develop new sets of tools, from neutral models of whole-genome evolution and sequence classification methods to ways of linking molecular information with morphology and databases. I will discuss how we are using these tools to discover new principles and new molecular components. Unexpectedly, this evolutionary approach is leading us to new ways of finding druggable targets and of repositioning existing drugs.
We provide our personal vision of what could be the next generation of Web search engines, based on a single premise: people do not really want to search, they want to get tasks done. Hence, the key to a better experience will come from combining deeper analysis of content with detailed inference of user intent. To achieve this, the main ideas are: (1) in place of the indexing that search engines traditionally perform, we have a content analysis phase that spots entities such as people, places and dates in documents; (2) at query time we assign an intent to the user based on the query and its context; and then (3) we retrieve entities matching the intent and assemble a results page not of documents, but of matching entities and their attributes.
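The three-step pipeline above can be sketched in miniature. Everything in this toy (the gazetteer, the intent rules, the result format) is an illustrative assumption, not the authors' actual system:

```python
import re

# (1) Content analysis: a tiny gazetteer-based entity spotter standing in
# for the real entity-recognition phase.
GAZETTEER = {
    "lisbon": ("PLACE", {"country": "Portugal"}),
    "ada lovelace": ("PERSON", {"born": "1815"}),
}

def spot_entities(text):
    """Return (surface form, type, attributes) for every known entity in text."""
    lowered = text.lower()
    return [(surface, etype, attrs)
            for surface, (etype, attrs) in GAZETTEER.items()
            if surface in lowered]

# (2) Intent inference: map simple query patterns to the entity type wanted.
def infer_intent(query):
    q = query.lower()
    if re.search(r"\bwho\b", q):
        return "PERSON"
    if re.search(r"\bwhere\b", q):
        return "PLACE"
    return "ANY"

# (3) Assemble a results "page" of entities matching the intent, not documents.
def answer(query, corpus):
    intent = infer_intent(query)
    results = []
    for doc in corpus:
        for surface, etype, attrs in spot_entities(doc):
            if intent in ("ANY", etype):
                results.append({"entity": surface, "type": etype, **attrs})
    return results

docs = ["Ada Lovelace wrote the first program.", "Flights to Lisbon are cheap."]
print(answer("where should I travel?", docs))
```

Note how the "where" query surfaces only the place entity with its attributes, even though a person entity also occurs in the corpus.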
In this talk, we will discuss graph cores, graph clustering and their application to a real problem. A core in a graph is usually taken as a set of highly connected vertices. Though general, this definition is intuitive and useful for studying the structure of many real networks. Nevertheless, depending on the problem, different formulations of graph core may be required, leading to the known concept of a generalized core. Thus, we study and further extend the notion of a generalized core. Given a graph, we propose a definition of graph core based on a subset of its subgraphs and on a subgraph property function. Our approach generalizes several notions of graph core proposed independently in the literature, introducing a general and theoretically sound framework for the study of fully generalized graph cores. Moreover, we discuss emerging applications of graph cores, such as improved graph clustering methods and complex network motif detection. In particular, we discuss an application to query log mining.
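To make the generalized-core idea concrete, here is a minimal peeling sketch in the spirit of the generalized cores the talk builds on (the function names and the toy graph are assumptions for illustration): given a vertex property function and a threshold, repeatedly delete vertices whose property value falls below the threshold. With the property "degree inside the surviving subgraph" this reduces to the classic k-core.

```python
def generalized_core(adj, prop, t):
    """adj: {vertex: set of neighbours}; prop(v, alive, adj) -> number.
    Returns the maximal vertex set in which every vertex has prop(v) >= t."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if prop(v, alive, adj) < t:
                alive.discard(v)   # peel v; may drop neighbours below t too
                changed = True
    return alive

def degree_in(v, alive, adj):
    # Classic k-core property: degree counted inside the surviving subgraph.
    return sum(1 for u in adj[v] if u in alive)

# A triangle {a, b, c} with a pendant vertex d: the 2-core peels off d.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(generalized_core(graph, degree_in, 2))
```

Swapping `degree_in` for a different property function (weighted degree, clustering contribution, a subgraph statistic) yields other members of the generalized-core family without changing the peeling loop.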
Systems Biology is an emerging field within bioscience that uses holism, a global and integrative perspective, rather than reductionism to explain a biological system's behavior. This approach is particularly useful to quantitatively characterize and predict the system's dynamics. In our application, multivariate time series of Lactococcus lactis metabolite concentrations are measured in perturbation experiments. Prior knowledge about the metabolic network topology is represented in the form of parametrized nonlinear ordinary differential equations. Our goal is to identify appropriate models and parameters for the network.
In this talk, two different approaches to parameter estimation will be introduced: Bayesian filtering and unified modeling of glucose uptake. We conclude that the Bayesian approach offers powerful tools for identifying the parameters of such networks, provided identifiability is guaranteed. Taking several different experiments into account may yield model parameters that describe the system's behavior under various conditions.
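The Bayesian idea can be illustrated on a deliberately tiny problem (this is not the L. lactis network; the ODE, the prior, and all numbers are assumptions for the sketch): estimate an unknown rate constant k of dx/dt = -k·x from noisy observations by weighting prior samples with a Gaussian likelihood.

```python
import math
import random

random.seed(0)  # deterministic toy run

K_TRUE, X0, SIGMA = 0.5, 10.0, 0.1          # hidden truth and noise level
TIMES = [0.5 * i for i in range(1, 11)]      # observation times

def model(k, t):
    # Analytic solution of dx/dt = -k*x with x(0) = X0.
    return X0 * math.exp(-k * t)

# Simulated noisy measurements of the trajectory.
data = [model(K_TRUE, t) + random.gauss(0.0, SIGMA) for t in TIMES]

# Prior samples ("particles") for k, weighted by the Gaussian likelihood.
particles = [random.uniform(0.0, 2.0) for _ in range(5000)]
weights = []
for k in particles:
    loglik = sum(-0.5 * ((y - model(k, t)) / SIGMA) ** 2
                 for t, y in zip(TIMES, data))
    weights.append(math.exp(loglik))  # underflows harmlessly to 0 for bad k

# Posterior mean of k as the weighted average of the particles.
total = sum(weights)
k_post = sum(k * w for k, w in zip(particles, weights)) / total
print(round(k_post, 2))
```

The weighted average concentrates near the true rate; with identifiable parameters and data from several experimental conditions pooled into one likelihood, the same machinery scales to networks of coupled ODEs, which is where sequential Bayesian filters take over from this batch sketch.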
Some scientists use Excel as their main application for data storage and analysis. This approach leads to data dispersion and knowledge segregation in organizations, mainly because Excel files are usually stored on personal computers and the data contained in these files cannot be queried. Organizations dealing with constant changes in their knowledge domain, such as the life sciences, have been adopting semantic web technologies to handle large amounts of data and obtain the flexibility needed to support ontology changes over time with minimal impact on the existing data. By mapping OWL ontologies into the Excel object model, we demonstrate that it is possible for end users to keep using Excel as a front-end while providing organizations with the means to store data in an aggregated manner, allowing a more thorough data analysis.
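The mapping can be pictured with a purely illustrative sketch (the class and property names are invented, and plain dictionaries stand in for both the OWL API and the Excel object model used in the real work): ontology classes become sheet-like tables, datatype properties become columns, and individuals become rows, so the data stays queryable instead of trapped in ad-hoc cells.

```python
# Toy "ontology": one class with its datatype properties.
ONTOLOGY = {
    "Protein": ["name", "organism", "mass_kda"],
}

def make_workbook(ontology):
    # One "sheet" per ontology class, with a header row from its properties.
    return {cls: {"header": props, "rows": []} for cls, props in ontology.items()}

def add_individual(workbook, cls, **values):
    # An ontology individual becomes one row, columns ordered by the header.
    sheet = workbook[cls]
    sheet["rows"].append([values.get(p) for p in sheet["header"]])

def query(workbook, cls, prop, value):
    # The payoff of the mapping: data entered through the sheet is queryable.
    sheet = workbook[cls]
    col = sheet["header"].index(prop)
    return [row for row in sheet["rows"] if row[col] == value]

wb = make_workbook(ONTOLOGY)
add_individual(wb, "Protein", name="LacZ", organism="E. coli", mass_kda=116)
add_individual(wb, "Protein", name="Nisin", organism="L. lactis", mass_kda=3.4)
print(query(wb, "Protein", "organism", "L. lactis"))
```

If the ontology gains a property, only the header list changes; existing rows are unaffected, which mirrors the low-impact schema evolution the semantic-web layer is meant to provide.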
In this talk we will study algorithms for the max-plus product of Monge matrices. These algorithms exploit the underlying regularities of the matrices to run faster than the general multiplication algorithm. A non-naive solution is to iterate the SMAWK algorithm. For specific classes there are more efficient algorithms. We present a new multiplication algorithm (MMT) that is efficient both for general Monge matrices and for specific classes. Theoretical and empirical analysis shows that MMT operates in near-optimal space and time. We thereby give further insight into an open problem proposed by Landau. The resulting algorithms have several applications in bioinformatics; in particular, Monge matrices occur in genome alignment problems.
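A hedged sketch of the structural fact such algorithms exploit (the MMT algorithm itself is not reproduced here): in the max-plus product C[i][j] = max_k (A[i][k] + B[k][j]), Monge-type structure makes the maximizing index move monotonically across rows, so row optima can be found by divide and conquer instead of scanning every column.

```python
def naive_maxplus(A, B):
    # The definition: C[i][j] = max over k of A[i][k] + B[k][j].
    m, p, n = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

def monotone_row_maxima(M):
    """Row maxima of a matrix whose per-row argmax is nondecreasing
    (as for inverse-Monge matrices), by divide and conquer on the rows:
    O((rows + cols) log rows) entry evaluations instead of rows * cols."""
    n_rows, n_cols = len(M), len(M[0])
    best = [None] * n_rows

    def solve(top, bottom, left, right):
        if top > bottom:
            return
        mid = (top + bottom) // 2
        _, arg = max((M[mid][j], j) for j in range(left, right + 1))
        best[mid] = M[mid][arg]
        # Monotonicity: rows above mid peak at or before arg, rows below at or after.
        solve(top, mid - 1, left, arg)
        solve(mid + 1, bottom, arg, right)

    solve(0, n_rows - 1, 0, n_cols - 1)
    return best

# M[i][j] = i * j is inverse-Monge, so its row argmax is monotone.
M = [[i * j for j in range(6)] for i in range(6)]
print(monotone_row_maxima(M))
```

Iterating this kind of monotone search per output column is essentially what "iterate SMAWK" means; algorithms like MMT go further by reusing work across columns.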
Computational methods for the characterization and detection of protein binding sequences through information theory
Submitted by smadeira on Wed, 06/23/2010 - 20:14.
Regulatory sequence detection is a critical facet of understanding the cell mechanisms that coordinate the response to stimuli. Protein synthesis involves the binding of a transcription factor to specific sequences in a process related to the initiation of gene expression. A characteristic of this binding process is that the same factor binds to different sequences placed throughout the genome. Thus, any computational approach faces many difficulties related to the variability observed in the binding sequences. Our work proposes the detection of transcription factor binding sites based on a parametric uncertainty measure (Rényi entropy). The detection algorithm evaluates the variation in the total Rényi entropy of a set of sequences when a candidate sequence is assumed to be a true binding site belonging to the set.
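The scoring idea can be sketched as follows (the toy motif, the choice of alpha, and the decision threshold are assumptions here; only the entropy formula is standard): the order-alpha Rényi entropy H_a(p) = log2(sum_i p_i^a) / (1 - a) is computed per alignment column, and a candidate is scored by how much assuming it to be a true site changes the total entropy of the set.

```python
import math

def renyi_entropy(probs, alpha):
    # H_a(p) = log2(sum p_i^alpha) / (1 - alpha); alpha = 1 is the Shannon limit.
    assert alpha > 0 and alpha != 1
    return math.log2(sum(p ** alpha for p in probs if p > 0)) / (1 - alpha)

def column_probs(column):
    # Nucleotide frequencies of one alignment column.
    return [column.count(b) / len(column) for b in "ACGT"]

def total_entropy(sites, alpha):
    # Total uncertainty of the motif: sum of per-column Renyi entropies.
    return sum(renyi_entropy(column_probs(c), alpha) for c in zip(*sites))

def candidate_score(sites, candidate, alpha):
    # Entropy change when the candidate is assumed to be a true binding site:
    # candidates consistent with the motif lower (or barely raise) the total.
    return total_entropy(sites + [candidate], alpha) - total_entropy(sites, alpha)

sites = ["TATAAT", "TATGAT", "TACAAT"]            # toy motif instances
print(candidate_score(sites, "TATAAT", alpha=2))  # consistent with the motif
print(candidate_score(sites, "GCCGCG", alpha=2))  # inconsistent with it
```

Varying the free parameter alpha changes how heavily the measure penalizes rare symbols, which is precisely the knob a parametric uncertainty measure offers over plain Shannon entropy.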
Mathematical modeling is becoming established in the immunologist's toolbox as a method to gain insight into the dynamics of the immune response and its components. Nowhere more so than in the study of human immunodeficiency virus (HIV) infection. I will review different areas of the study of the dynamics of CD4+ T-cells in the setting of HIV, where modeling has played important and diverse roles in helping us understand CD4+ T-cell homeostasis, the effect of HIV infection on T-cell dynamics, and the processes of T-cell production and destruction.
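As a worked example of the kind of model reviewed here, the standard target-cell-limited ODE system tracks uninfected T cells, infected cells, and free virus: dT/dt = lam - d*T - beta*T*V, dI/dt = beta*T*V - delta*I, dV/dt = p*I - c*V. The parameter values below are illustrative only, and a simple Euler step stands in for a proper ODE solver:

```python
def simulate(lam, d, beta, delta, p, c, T0, I0, V0, dt=0.01, t_end=200.0):
    """Forward-Euler integration of the target-cell-limited model.
    T: uninfected CD4+ T cells, I: infected cells, V: free virus."""
    T, I, V = T0, I0, V0
    for _ in range(int(t_end / dt)):
        dT = lam - d * T - beta * T * V   # production, death, infection
        dI = beta * T * V - delta * I     # new infections, infected-cell death
        dV = p * I - c * V                # virion production and clearance
        T += dt * dT
        I += dt * dI
        V += dt * dV
    return T, I, V

# Sanity check: with no virus (beta = 0, V0 = 0) the model reduces to the
# homeostasis equation dT/dt = lam - d*T, whose steady state is T* = lam / d.
T, I, V = simulate(lam=10.0, d=0.1, beta=0.0, delta=1.0, p=100.0, c=23.0,
                   T0=50.0, I0=0.0, V0=0.0)
print(round(T))  # -> 100
```

Fitting the infection and clearance parameters of systems like this to patient viral-load data is what yields the production and destruction rates the talk discusses.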
There are several semantic sources on the Web that are either explicit, e.g. Wikipedia, or implicit, e.g. derived from Web usage data. Most of them are related to user-generated content (UGC), or what is today called the Web 2.0. In this talk we show several applications of mining the wisdom of the crowds behind UGC to improve search. We will show live demos of finding relations in Wikipedia and of improving image search, as well as our current research on the topic. Our final goal is to produce a virtuous data feedback circuit to leverage the Web itself.