March 27, 2019 (Wednesday), 15:00, Room 408 (TPOC-3)




March 13, 2019 (Wed), 15:00, Room 408 (TPOC-3)

Seminar “PeerReview4All: Fair and Accurate Reviewer Assignment in Peer Review” by Ivan Stelmakh (Carnegie Mellon University)



February 20, 2019 (Wed), 13:00, Room 351 (TPOC-4)




TUE, Nov 27 2018, 13:00, Room 237 (TPOC-4)

Prof. Dylov’s group gathering for brainstorming medical text mining projects. The students will give an overview of their work. If anyone happens to be available and finds the subject interesting, please join (or forward)!


TUE, Nov 20 2018, 14:00, Room 351 (TPOC-4)

Section seminar (MMDS)

Dr. Vladimir Palyulin (research scientist, CDISE): “How to compute the viscoelastic properties of polymer glasses from their structure”


TUE, Nov 13 2018, 14:00-15:00, TPOC-4 (room to be determined)

Dr. Eugene Podryabinkin (Skoltech CEES): Active learning of linearly parametrized interatomic potentials


Friday, Nov 2 2018, 11:00-12:30, TPOC-4 room 351

Prof. Édouard Oudet (Univ. Grenoble): “Numerical Study of 1D Optimal Structures”


We focus our attention on shape optimization problems in which one-dimensional connected objects are involved. Very old and classical problems in the calculus of variations are of this kind: the Euclidean Steiner tree problem, optimal irrigation networks, crack propagation, etc. In the first part we quickly recall some previous work, in collaboration with F. Santambrogio, related to the functional relaxation of the irrigation cost. We establish a Γ-convergence result of Modica and Mortola type and illustrate its efficiency from a numerical point of view by computing optimal networks associated to simple source/sink configurations. We also present more involved situations with non-Dirac sinks in which a fractal behavior of the optimal network is expected. In the second part of the talk we restrict our study to the Euclidean Steiner tree problem. We recall recent numerical approaches developed over the last five years to approximate optimal trees: partitioning formulations, relaxation with geodesic distance terms, and energetic constraints.

We describe the first results, obtained in collaboration with A. Massaccesi and B. Velichkov, to certify the optimality of a given tree. With our discrete parametrization of generalized calibrations, we are able to recover the theoretical optimal matrix fields which certify the optimality of simple trees associated to the vertices of regular polygons. Finally, we focus on the delicate problem of identifying the optimal structure. Based on a recent approach obtained in collaboration with G. Orlandi and M. Bonafini, we describe the first convexification framework associated to the Euclidean Steiner tree problem, which provides relevant tools from a numerical point of view.

SPEAKER INTRODUCTION: Professor Édouard Oudet's scientific interests include shape optimization, convex geometry, the calculus of variations, and computational mathematics. He received his PhD in Applied Mathematics in 2002 and his Habilitation à Diriger des Recherches in 2009, and since 2011 he has been a Full Professor in Applied Mathematics at the University of Grenoble Alpes (France).
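For intuition, the simplest non-trivial instance of the Euclidean Steiner problem, three terminals joined through a single Steiner (Fermat) point, can be computed with Weiszfeld's fixed-point iteration. This is only a toy illustration of the problem discussed above, not one of the numerical approaches from the talk:

```python
import numpy as np

def fermat_point(points, iters=1000):
    """Weiszfeld's iteration for the point minimizing the total distance to
    the given terminals.  For three terminals (all angles < 120 degrees)
    this is the single Steiner point of the Euclidean Steiner tree."""
    x = points.mean(axis=0)                    # start at the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
        w = 1.0 / d                            # inverse-distance weights
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

terminals = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
x = fermat_point(terminals)
# At the optimum the three unit vectors toward the terminals sum to zero
# (the edges meet at 120-degree angles).
u = (terminals - x) / np.linalg.norm(terminals - x, axis=1)[:, None]
print(x, np.linalg.norm(u.sum(axis=0)))
```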


Tuesday, Oct 30 2018, 14:00-15:00, TPOC-4 room 351

Prof. Stefan Roth (TU Darmstadt) : “Deep Learning with Perceptually-Motivated and Probabilistic Networks”

Supervised learning with deep convolutional networks is the workhorse of the majority of computer vision research today. While much progress has been made already, exploiting deep architectures with standard components, enormous datasets, and massive computational power, I will argue that it pays to scrutinize some of the components of modern deep networks. I will begin by looking at the common pooling operation and show how we can replace standard pooling layers with a perceptually-motivated alternative, with consistent gains in accuracy. Next, I will show how we can leverage self-similarity, a well-known concept from the study of natural images, to derive non-local layers for various vision tasks that boost the discriminative power. Finally, I will present a lightweight approach to obtaining predictive probabilities in deep networks, allowing one to judge the reliability of the prediction.

Speaker’s bio: Stefan Roth received the Diplom degree in Computer Science and Engineering from the University of Mannheim, Germany in 2001. In 2003 he received the ScM degree in Computer Science from Brown University, and in 2007 the PhD degree in Computer Science from the same institution. Since 2007 he has been on the faculty of Computer Science at Technische Universität Darmstadt, Germany (Juniorprofessor 2007-2013, Professor since 2013). His research interests include probabilistic and deep learning approaches to image modeling, motion estimation and tracking, as well as object recognition and scene understanding. He has received several awards, including honorable mentions for the Marr Prize at ICCV 2005 (with M. Black) and ICCV 2013 (with C. Vogel and K. Schindler), the Olympus-Prize 2010 of the German Association for Pattern Recognition (DAGM), and the Heinz Maier-Leibnitz Prize 2012 of the German Research Foundation (DFG). In 2013, he was awarded a Starting Grant of the European Research Council (ERC). He regularly serves as an area chair for CVPR, ICCV, and ECCV, and is a member of the editorial boards of the International Journal of Computer Vision (IJCV), the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), and PeerJ Computer Science.
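As a rough sketch of the self-similarity idea, a generic non-local layer replaces each feature vector with a similarity-weighted average over all positions. The perceptually-motivated layers from the talk may differ; this is only a minimal NumPy illustration of the general construction:

```python
import numpy as np

def nonlocal_layer(x):
    """Generic self-similarity (non-local) operation: each feature vector
    is augmented with a similarity-weighted average of all the others.
    x: (N, C) array of N spatial positions with C channels."""
    sim = x @ x.T                          # pairwise dot-product similarity
    sim -= sim.max(axis=1, keepdims=True)  # stabilize the softmax
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)      # row-wise softmax weights
    return x + w @ x                       # residual connection

np.random.seed(0)
feats = np.random.randn(16, 8)             # toy feature map, 16 positions
out = nonlocal_layer(feats)
print(out.shape)
```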


Tuesday, Oct 30 2018, 15:00-16:00, TPOC-4 room 351
Dr. Pavlo Molchanov (NVIDIA): Accelerating CNNs with pruning and conditional inference

Convolutional neural networks (CNNs) are used extensively in computer vision applications. While modern deep CNNs are composed of a variety of layer types, runtime during prediction is dominated by the evaluation of convolutional layers. With the goal of speeding up inference, a number of different acceleration techniques have been proposed to reduce computation. In the talk I will focus on two methods we have been working on at NVIDIA. First, I will focus on methods to remove the least important neurons from a trained network, also known as pruning. In particular, we will talk about removing entire feature maps (neurons) using a variety of criteria that heuristically approximate their importance. We evaluate a number of such methods on a transfer learning task where training a smaller network is impossible without overfitting. By applying pruning we observe a 6x to 10x reduction in computation for fine-grained classification. In the second part of the talk I will focus on another technique for speeding up inference, called conditional inference. Such methods condition the computation of subsequent layers on the previously evaluated features and avoid redundant computations. I will discuss the details of one such architecture, IamNN, which evolved from the ResNet family and has 12x fewer parameters together with a 3x saving in computation.
Bio: Dr. Pavlo Molchanov obtained his PhD from Tampere University of Technology, Finland, in the area of signal processing in 2014. His dissertation focused on designing automatic target recognition systems for radars. Since 2015 he has been with the Learning and Perception Research team at NVIDIA, currently holding a senior research scientist position. His research focuses on methods for neural network acceleration and on designing novel human-computer interaction systems for in-car driver monitoring. He received the EuRAD best paper award in 2011 and the EuRAD young engineer award in 2013.
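As a minimal sketch of the pruning idea, the snippet below ranks convolutional filters by one simple importance heuristic (the L1 norm of their weights) and keeps only the top fraction. The actual criteria evaluated in the talk may differ; this is only an illustration of the general scheme:

```python
import numpy as np

def prune_filters(weights, keep_ratio=0.5):
    """Rank filters by a simple importance heuristic (L1 norm of the
    weights) and keep only the top fraction.
    weights: (num_filters, in_channels, kh, kw)."""
    importance = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    k = max(1, int(weights.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(importance)[::-1][:k])  # kept filter indices
    return weights[keep], keep

np.random.seed(1)
w = np.random.randn(64, 32, 3, 3)          # a toy convolutional layer
pruned, kept = prune_filters(w, keep_ratio=0.25)
print(pruned.shape)
```

In practice the pruned network is then fine-tuned to recover accuracy; the heuristic only decides which feature maps to drop.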


Thursday, 11 August 2016

15:00 – 16:00 in Room 407
Igor Tetko, Helmholtz Zentrum Muenchen, Germany

OCHEM: A public platform to deposit data, develop and publish top-ranked models

I will give an overview of the On-line CHEmical database and Modelling environment (OCHEM) platform [1], which has recently been used to contribute top-ranked approaches to the EPA ToxCast [2] and NIH Tox21 [3] challenges. OCHEM contains more than 1.2M data points for several hundred properties uploaded by more than 3000 users. The platform is integrated with a modelling framework and provides access to >100 models, ranging from simple linear equations to state-of-the-art algorithms based on descriptor matrices with >0.2 trillion entries. The challenges of developing models with large datasets, as well as the considerations used to achieve the best-scoring submissions for the EPA and NIH challenges, will be discussed. I will also overview other available predictors for various physico-chemical and biological properties, and discuss how OCHEM can be used to analyse data, develop highly predictive models, and interpret them. The future development of OCHEM within the Marie Curie BIGCHEM project will be outlined [4].


1) Sushko I et al. J. Comput. Aided Mol. Des. 25(6), 533-554 (2011).
2) Novotarskyi S et al. Chem. Res. Toxicol. 29(5), 768-775 (2016).
3) Abdelaziz A et al. Frontiers Environ. Sci. 4(2), 2 (2016).
4) Tetko IV et al. Mol. Inf. (2016), DOI:10.1002/minf.201600073.


Monday, 18 July 2016

11:00 – 12:00 in Room 403
Aleksey Polunchenko, State University of New York at Binghamton, USA

Asymptotic Near-Minimaxity of the Shiryaev-Roberts-Pollak Change-Point Detection Procedure in Continuous Time

We consider the quickest change-point detection problem where the aim is to detect, in an optimal fashion, a possible onset of a given drift in “live”-monitored standard Brownian motion; the change-point is assumed unknown (nonrandom). We show that Pollak’s (1985) randomized version of the classical quasi-Bayesian Shiryaev-Roberts detection procedure is nearly-optimal in the minimax sense of Pollak (1985), i.e., Pollak’s (1985) maximal conditional expected delay to detection is minimized to within an additive term that vanishes asymptotically as the average run length (ARL) to false alarm level gets infinitely high. This is a strong type of optimality known in the literature as asymptotic Pollak-minimaxity of order-three. The proof is explicit in that all the relevant performance metrics are found analytically and in a closed form. This includes the Shiryaev-Roberts statistic’s quasi-stationary distribution, which is a key ingredient of Pollak’s (1985) tweak of the Shiryaev-Roberts procedure. The obtained order-three optimality is an improvement of the 2009 result of Burnaev, Feinberg and Shiryaev who proved that the randomized Shiryaev-Roberts procedure is asymptotically Pollak-minimax, but only up to the second order, i.e., the maximal conditional expected delay to detection is minimized to within an additive term that goes to a positive constant as the ARL to false alarm level gets infinitely high. The discrete-time analogue of our result was previously established by Pollak (1985).

REFERENCES:

1. Burnaev, E.V., Feinberg, E.A. and Shiryaev, A.N. (2009). “On asymptotic optimality of the second order in the minimax quickest detection problem of drift change for Brownian motion”. Theory of Probability and Its Applications, 53:519–536.

2. Pollak, M. (1985). “Optimal detection of a change in distribution”. Annals of Statistics, 13:206–227.
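For intuition, the discrete-time Shiryaev-Roberts statistic admits a simple recursion, R_n = (1 + R_{n-1}) · LR_n, with an alarm raised once R_n crosses a threshold. Below is a minimal sketch for a Gaussian mean shift — a toy discrete-time analogue, not the continuous-time Brownian-motion setting analyzed in the talk, and without Pollak's randomized head-start:

```python
import numpy as np

def shiryaev_roberts(x, mu=1.0, threshold=1000.0):
    """Discrete-time Shiryaev-Roberts procedure for detecting a mean shift
    0 -> mu in i.i.d. N(mean, 1) observations.
    Recursion: R_n = (1 + R_{n-1}) * LR_n; alarm when R_n >= threshold."""
    r = 0.0
    for n, xn in enumerate(x, start=1):
        lr = np.exp(mu * xn - mu ** 2 / 2)   # Gaussian likelihood ratio
        r = (1.0 + r) * lr
        if r >= threshold:
            return n                          # alarm time
    return None                               # no alarm raised

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100),   # pre-change regime
                       rng.normal(1, 1, 100)])  # post-change regime
print(shiryaev_roberts(data))
```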


Monday, 11 July 2016

11:00 – 12:00 in Room 407
Benjamin Fuchs, Institute of Electronics and Telecommunications
of Rennes (IETR), CNRS, France

Application of Convex Relaxation to Array Synthesis and Antenna Selection Problems

The synthesis of antenna arrays is a very long-standing field in electromagnetism because of its many applications (e.g. radar, radio astronomy, sonar, communications, direction finding, seismology, medical diagnosis and treatment). A host of methods have been proposed since the 1940s to solve increasingly difficult synthesis problems. These techniques range from analytical methods (fast, but limited to very specific problems) to global optimization approaches (comprehensive, but limited in performance by their computational burden). Convex optimization has been shown to be a good trade-off in efficiency and generality between analytical and global optimization techniques in a number of relevant cases.
The purpose of the talk is to show that a variety of difficult antenna array synthesis problems can be approximated as convex optimization ones and therefore be efficiently solved. More specifically, the application of the semidefinite relaxation technique to approximate the quadratic constraints arising in many synthesis problems is described. The synthesis of shaped beams, phase-only excitations or reconfigurable arrays are instances shown to highlight the practical relevance of the proposed strategy.
In addition, the combinatorial problem of selecting antennas from among a set of possible radiators in order to optimize the performance of the array is addressed. A convex relaxation of the Boolean constraints, followed by a probabilistic interpretation of the solution, makes it possible to quickly obtain bounds on the best achievable array performance and to make a good antenna selection. Representative numerical examples, such as the selection of quantized array excitations, antenna types, and antenna locations to optimize array performance, are shown to illustrate the interest of the proposed approach.
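To illustrate the relax-and-round idea on a related problem, the sketch below performs sensor/antenna selection by relaxing the Boolean variables to [0,1], maximizing a concave log-determinant surrogate by projected gradient ascent, and then rounding to the k largest relaxed weights. This toy formulation is an assumption for illustration only, not the semidefinite relaxation described in the talk:

```python
import numpy as np

def project_capped_simplex(x, k):
    """Project onto {0 <= x <= 1, sum(x) = k} by bisection on a shift t."""
    a, b = x.min() - 1.0, x.max()
    for _ in range(100):
        t = (a + b) / 2
        if np.clip(x - t, 0.0, 1.0).sum() > k:
            a = t
        else:
            b = t
    return np.clip(x - (a + b) / 2, 0.0, 1.0)

def select_antennas(A, k, iters=300, lr=0.5):
    """Relax Boolean selection variables to [0,1], maximize the (concave)
    log-determinant of the information matrix by projected gradient ascent,
    then round: keep the k antennas with the largest relaxed weights.
    A: (n_antennas, d) matrix of candidate response vectors."""
    n = A.shape[0]
    x = np.full(n, k / n)
    for _ in range(iters):
        M = (A * x[:, None]).T @ A + 1e-9 * np.eye(A.shape[1])
        Minv = np.linalg.inv(M)
        grad = np.einsum('ij,jk,ik->i', A, Minv, A)  # d logdet(M) / dx_i
        x = project_capped_simplex(x + lr * grad, k)
    return np.sort(np.argsort(x)[::-1][:k])

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 4))      # 20 candidate antennas, 4-dim responses
print(select_antennas(A, k=6))
```

The rounding step mirrors the probabilistic interpretation mentioned in the abstract: large relaxed weights are read as high selection probabilities.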


Friday, 24 June 2016

11:00 – 12:00 in Room 403
Rafael Ballester-Ripoll, University of Zurich

Tensor Decomposition in Graphics and Interactive Visualization

Tensor decomposition is an emerging framework for manipulating and visualizing large and high-dimensional data sets, and it has been gaining momentum in recent years among the graphics and visual computing communities. Tensor compression usually outperforms more traditional approaches such as the Fourier and wavelet transforms, and its advantage grows the higher the dimensionality. Visual data sets (image stacks, computer tomography scans, time-varying data) often possess a relatively low-rank tensor structure, which allows many efficient operations in the tensor-compressed domain. Furthermore, there is an increasing number of algorithms for tensor completion and black-box sampling, with applications in sparse sampling and interpolation. Thanks to these combined properties, tensor methods have found concrete applications in volume and photorealistic material rendering, interactive scientific visualization, texture synthesis, image/volume completion, and more. In this talk I will overview the most popular decomposition models (canonical, Tucker, and the more recent tensor train), discuss the latest applications and my current research on tensor-based graphics and visualization, and outline future trends in these areas.
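As a small concrete example of Tucker-type compression, the sketch below implements the truncated higher-order SVD (HOSVD) in plain NumPy and recovers a low-multilinear-rank tensor exactly. This is a minimal illustration of one decomposition model, not the algorithms used in the speaker's work:

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a simple (non-iterative) Tucker
    decomposition.  Returns the core tensor and per-mode factors."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# A tensor of multilinear rank (2,2,2) is recovered exactly at those ranks,
# stored as a 2x2x2 core plus three thin factors instead of 10x12x14 entries.
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(10, 2)), rng.normal(size=(12, 2)), rng.normal(size=(14, 2))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
core, factors = hosvd(T, (2, 2, 2))
print(np.linalg.norm(reconstruct(core, factors) - T) / np.linalg.norm(T))
```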


Tuesday, 21 June 2016

11:00 – 12:00 in Room 407
Prof. Mikhail Myagkov, University of Oregon & Tomsk State University

Contemporary Issues and Methodologies of Social Networks Big Data Analysis

The scientific research seminar “Contemporary Issues and Methodologies of Social Networks Big Data Analysis”, organised jointly by the Laboratory of Big Data in Social Sciences of Tomsk State University and the Skolkovo Institute of Science and Technology, will take place on the 21st of June at 11:00 am.

During the Seminar the following topics will be discussed:

  1. The potential and the opportunities in using Big Data to analyse Social Media and Networks.
  2. Methodology of data analysis in Social Media – Social Network Analysis and Natural Language Processing: description and specifics of use.
  3. Cases of using Big Data to solve the problems of identification and clustering of groups and communities in Social Media (based on the example of extremist groups and communities).
  4. The research experience in identifying and analysing extremist groups and communities in Russian Social Media by Laboratory of Big Data in Social Sciences of Tomsk State University.

Wednesday, 15 June 2016

11:00 – 12:00 in Room 407
Prof. Vadim V. Fedorov, Ohio State University

Targeting patient-specific 3D functional microanatomy of the human heart: new therapeutic strategy for cardiac arrhythmias

Atrial fibrillation (AF) is the most common sustained arrhythmia and is associated with increased cardiovascular morbidity and mortality. Clinical studies currently lack reliable mapping approaches necessary to resolve the detailed course of fast electrical activity during AF in patients as a result of the highly complex 3D structure of the human atria. Consequently, there remains a significant debate around the mechanism driving AF, the cause of these drivers, and how best to locate and treat these patient-specific drivers in patients with cardiac diseases. For this purpose, we developed a novel approach to simultaneously map sub-endocardial and sub-epicardial activation patterns and integrate these data with ex vivo 3D gadolinium-enhanced MRI images of the atrial microanatomic architecture, including fibrosis, in order to elucidate patient-specific mechanisms of AF in diseased human hearts. We found that a limited number of sustained intramural re-entry circuits anchored to patient-specific 3D microanatomic tracks are responsible for the maintenance of AF. This translational research is a critical step toward the development of new patient-specific therapies, whereby AF drivers can be accurately defined, targeted, and successfully treated to cure the most common arrhythmia in Russia and the United States.


Friday, 27 May 2016 (note that this is the 3rd out of 3 CDISE Seminars this week)

11:00 – 12:00 in Room 148 (note that the room is different from usual)
Dr. Vladimir Rubin, Dr. Rubin IT Consulting

Process Science in the Age of Big Software

Nowadays, data science is one of the most rapidly emerging interdisciplinary fields, creating thousands of new jobs. People surrounded by machines are involved in the “Internet of events”, which includes the Internet of things, the Internet of content, and the social networks; they continuously produce enormous amounts of data. This event data can be used not only for detecting data dependencies and patterns, but also for deriving process models. The area which deals with discovering processes from event data is called “process mining”. Process mining bridges the gap between classical data science and process management and constitutes the substance of a new discipline called “process science”.

In this talk, we introduce the methods of process mining and show how process mining helps to analyze and create “big software” using the event data generated by information systems at runtime. At the end we point out several industrial challenges which call for combining the areas of process and data science, the architecture of information systems, and agile approaches.
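As a tiny illustration, the first step of many process-discovery algorithms is counting the "directly-follows" relation in an event log. The log and activity names below are made up for the example:

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is directly followed by activity b across
    all traces -- the raw material for discovering a process model."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# A toy event log: each trace is one case's sequence of recorded activities.
log = [
    ["register", "check", "decide", "pay"],
    ["register", "check", "recheck", "decide", "pay"],
    ["register", "decide", "pay"],
]
dfg = directly_follows(log)
print(dfg[("register", "check")], dfg[("decide", "pay")])  # 2 3
```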


Thursday, 26 May 2016 (note that this is the 2nd out of 3 CDISE Seminars this week)

12:00 – 1:00pm in Room 407
Prof. Leong Hou U, University of Macau

Topic-Aware Reviewer Assignment Problems

In this talk, I will first focus on an assignment problem that addresses the peer reviewing process at academic conferences and journals. As we may know, achieving an appropriate assignment is not easy, because all reviewers should have similar workloads and the subjects of the assigned papers should be consistent with the reviewers’ expertise. In this work, we propose a new group assignment of reviewers to papers, which is a generalization of the classic Reviewer Assignment Problem. We show the NP-hardness of the problem and propose a 1/2-approximation solution. In the second part, I will introduce other research work of my group at the University of Macau.
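For flavor, a naive greedy baseline for reviewer assignment considers (reviewer, paper) pairs in order of decreasing expertise match while respecting capacity constraints. This is only an illustrative heuristic, not the 1/2-approximation algorithm from the talk:

```python
def assign_reviewers(similarity, per_paper, max_load):
    """Greedy heuristic: hand out (reviewer, paper) pairs from best match to
    worst, subject to each paper needing `per_paper` reviewers and each
    reviewer taking at most `max_load` papers.
    similarity[r][p] = expertise match of reviewer r for paper p."""
    n_rev, n_pap = len(similarity), len(similarity[0])
    load = [0] * n_rev
    assignment = {p: [] for p in range(n_pap)}
    pairs = sorted(((similarity[r][p], r, p)
                    for r in range(n_rev) for p in range(n_pap)), reverse=True)
    for s, r, p in pairs:
        if load[r] < max_load and len(assignment[p]) < per_paper:
            assignment[p].append(r)
            load[r] += 1
    return assignment

sim = [[0.9, 0.1, 0.3],   # reviewer 0's match with papers 0..2
       [0.2, 0.8, 0.4],
       [0.5, 0.6, 0.7]]
print(assign_reviewers(sim, per_paper=1, max_load=2))  # {0: [0], 1: [1], 2: [2]}
```

Greedy can leave some papers under-served when capacities are tight, which is exactly why guaranteed approximations like the one in the talk matter.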


Tuesday, 24 May 2016 (note that this is the 1st out of 3 CDISE Seminars this week)

1:00 – 2:30pm in Room 407
Prof. Evgeny Burnaev, Institute for Information Transmission Problems

Change-points and Anomalies Detection in Software-Intensive Systems

As contemporary software-intensive systems reach increasingly large scale, it is imperative that failure detection schemes be developed to help prevent costly system downtimes. A promising direction towards the construction of such schemes is the exploitation of easily available measurements of system performance characteristics such as the average number of processed requests, queue size per time unit, etc. In this presentation, we describe a holistic methodology for the detection of abrupt changes in time series data in the presence of quasi-seasonal trends and long-range dependence, with a focus on failure detection in computer systems. We propose a trend estimation method enjoying optimality properties in the presence of long-range dependent noise to estimate what is considered “normal” system behaviour. To detect change-points and anomalies, we develop an approach based on ensembles of “weak” detectors. We demonstrate the performance of the proposed change-point detection scheme using an artificial dataset, the publicly available Abilene dataset, as well as a proprietary geoinformation system dataset.
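As a toy version of the ensemble idea, one can run several differently tuned "weak" detectors (here, one-sided CUSUMs) and flag a change only where a fraction of them agree. The specific detectors and combination rule from the talk are surely more sophisticated; this is an assumed minimal sketch:

```python
import numpy as np

def cusum_alarm(x, drift, threshold):
    """One 'weak' detector: a one-sided CUSUM for an upward mean shift.
    Returns a boolean alarm indicator per time step."""
    s, alarms = 0.0, np.zeros(len(x), dtype=bool)
    for i, xi in enumerate(x):
        s = max(0.0, s + xi - drift)      # cumulative evidence of a shift
        alarms[i] = s > threshold
    return alarms

def ensemble_detect(x, detectors, vote=0.5):
    """Flag the first index where at least a `vote` fraction of the weak
    detectors raise an alarm."""
    votes = np.mean([cusum_alarm(x, d, t) for d, t in detectors], axis=0)
    idx = np.flatnonzero(votes >= vote)
    return int(idx[0]) if idx.size else None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1, 200),   # normal behaviour
                    rng.normal(1.5, 1, 100)])  # shifted regime
detectors = [(0.25, 8.0), (0.5, 6.0), (0.75, 5.0)]  # (drift, threshold) pairs
print(ensemble_detect(x, detectors))
```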


Thursday, 12 May 2016

12:00 – 13:00 in Room 407
Prof. Vladimir Okhmatovski, University of Manitoba

Novel Single-Source Integral Equation for Solution of Electromagnetic Scattering Problems on Penetrable Objects

A new Surface–Volume–Surface Electric Field Integral Equation (SVS-EFIE) is discussed. The SVS-EFIE is derived from the volume integral equation by representing the electric field inside the scatterer as a superposition of the waves emanating from its cross section’s boundary. The SVS-EFIE has several advantages. While being rigorous in nature, it features half the degrees of freedom of traditional surface integral equation formulations such as PMCHWT, and it requires only the electric-field type of Green’s function instead of both the electric and magnetic field types. The latter property brings significant simplifications to the solution of scattering problems on objects situated in multilayered media.

The SVS-EFIE has been developed for the solution of 3D scattering problems on general penetrable objects. It has also been applied to the solution of quasi-magnetostatic problems of current flow in complex interconnects in both homogeneous and multilayered media. A detailed description of the method-of-moments discretization and the resultant matrices is given. Due to the presence of a product of surface-to-volume and volume-to-surface integral operators, the discretization of the novel SVS-EFIE requires both surface and volume meshes. In order to validate the presented technique, the numerical results are compared with reference solutions.


Thursday, April 28, 2016

12:00 – 13:00 in Room 407
M.Arch. Daniel Zakharyan, British Higher School of Art and Design

Generative Creativity

The traditional design process is becoming increasingly irrelevant. Incredible advancements in digital technologies, multiplied by contemporary methods of fabrication that have dramatically reduced the cost of manufacturing diversity and complexity, allow us to drastically redefine the creative process. The generative age gives us a unique opportunity to use computational techniques for creative problems, where the designer, instead of creating one final object, builds a process to generate objects. This approach allows a designer to explore a multivariate space of solutions that would be unimaginable to produce by traditional means. As a result, we witness the birth of a completely new design paradigm: a paradigm that frees itself from preconceived notions and dares to go beyond the limits of human creativity.

This seminar will illustrate how this fundamental shift in thinking impacts creative workflows across all scales and disciplines, spanning from urban planning to fashion, and redefines all stages of the design process, starting with the creation of the object and ending with its fabrication.


Thursday, April 14, 2016

12:00 – 13:00 in Room 407
Prof. Vladislav Sidorenko, Keldysh Institute of Applied Mathematics
Dr. Dmitry Pritykin, Moscow Institute of Physics and Technology
Dr. Dmitry Yarotsky, Institute for Information Transmission Problems (IITP)

Space tether systems

We will discuss potential applications of space tether systems, including removal of space debris, atmosphere studies, and satellite formation flying. In particular, we will describe a new hub-and-spoke formation which consists of a main body (hub) connected by means of tethers with several deputy satellites. Such a formation can be used for simultaneous 3D-probing of the near-Earth environment; also the collocation with the existing satellite on the geostationary orbit is possible – in this case the hub-and-spoke formation “envelops” it. We demonstrate an opportunity to adjust the parameters of the formation so that its components perform free periodic oscillations without colliding with each other or with the central satellite.


Thursday, March 24, 2016

16:00 – 17:00 in Room 407 (note temporary date & room change)
Manos Athanassoulis, Harvard University

Designing Access Methods: The RUM Conjecture

The database research community has been building methods to store, access, and update data for more than four decades. Throughout the evolution of the structures and techniques used to access data, access methods have adapted to the ever-changing hardware and workload requirements. Today, even small changes in the workload or the hardware lead to a redesign of access methods. The need for new designs has been increasing as data generation and workload diversification grow exponentially, and hardware advances introduce increased complexity. New workload requirements are introduced by the emergence of new applications, and data is managed by large systems composed of more and more complex and heterogeneous hardware. As a result, it is increasingly important to develop application-aware and hardware-aware access methods.

The fundamental challenges that every researcher, systems architect, or designer faces when designing a new access method are how to minimize, i) read times (R), ii) update cost (U), and iii) memory (or storage) overhead (M). In this talk, I will first introduce the conjecture that when optimizing the read-update-memory overheads, optimizing in any two areas negatively impacts the third. I will present a simple model of the RUM overheads, and articulate the RUM Conjecture. I will then discuss how the RUM Conjecture manifests in multiple access methods designs and I will present a road-map towards RUM-aware access methods with specific examples.
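Two toy stores sketch opposite corners of the read-update-memory trade-off (illustrative only, not the access methods discussed in the talk):

```python
import bisect

class SortedArrayStore:
    """Read-optimized: O(log n) lookups via binary search, but every
    update pays an O(n) in-place insertion (high U, low R and M)."""
    def __init__(self):
        self.keys = []
    def insert(self, k):
        bisect.insort(self.keys, k)
    def contains(self, k):
        i = bisect.bisect_left(self.keys, k)
        return i < len(self.keys) and self.keys[i] == k

class LogStore:
    """Update-optimized: O(1) appends, but every lookup scans the whole
    log (low U, high R)."""
    def __init__(self):
        self.log = []
    def insert(self, k):
        self.log.append(k)
    def contains(self, k):
        return k in self.log

s1, s2 = SortedArrayStore(), LogStore()
for k in [5, 1, 3]:
    s1.insert(k)
    s2.insert(k)
print(s1.keys, s2.log)  # [1, 3, 5] [5, 1, 3]
```

Adding an auxiliary index to the log would speed up its reads at the cost of extra memory, which is precisely the third corner of the RUM triangle.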


Thursday, March 10, 2016

12:00 – 13:00 in Room 148 (note temporary room change)
Roman Suvorov, Federal Research Center “Computer Science and Control”

Data Mining for Medical Research, Web Content Filtering and Scientific Projects Expertise

Roman will talk about his experience with machine learning, natural language processing and data mining techniques and applications, including: web content filtering using image and text classification; retrospective analysis of the scientific project proposals expertise in the Russian Foundation for Basic Research; clinical diagnostics (disease detection) based on structured and textual data from electronic health records; comparative analysis of various cancer treatment methods based on data, automatically extracted from scientific papers; etc.


Thursday, February 18th, 2016

12:00 – 13:00 in Room 407
Dr Nadezda A Vasilyeva, Moscow State University

Nonlinear Dynamic Soil Aggregation Model

Soil-aggregation-mediated biological interactions at the micro-scale give rise to macro-patterns of soil regimes. This crucial phenomenon is an interesting case of self-organization in complex systems. In the present study we developed a physically based mathematical model of soil aggregation considering the major known biological feedbacks with soil physical parameters. According to our previous experimental studies, the affinity of organic matter to water is an important property affecting soil structure. Therefore, organic matter wettability is taken as the principal distinction between organic matter types in our model. The mathematical model is formulated as a system of non-linear ordinary differential equations, including reaction kinetics equations for biological and coagulation/adsorption/adhesion processes and Smoluchowski-type equations for aggregation. For parametrization of the model, a fast algorithm for the numerical solution of this type of equation is suggested. The present dynamic soil aggregation model is being developed to include the spatial distribution and transport of water, heat and chemical substances, so that it can be used to model properties of the soil profile.
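For illustration, the discrete Smoluchowski coagulation equations can be integrated directly for a toy constant kernel. This brute-force explicit Euler sketch is unrelated to the fast algorithm mentioned in the abstract; it only shows the structure of the aggregation terms:

```python
import numpy as np

def smoluchowski_step(c, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equations:
    dc_k/dt = 1/2 * sum_{i+j=k} K_ij c_i c_j  -  c_k * sum_j K_kj c_j.
    c[k] = concentration of aggregates of size k+1; K = coagulation kernel."""
    n = len(c)
    dc = np.zeros(n)
    for k in range(n):
        gain = 0.5 * sum(K[i, k - 1 - i] * c[i] * c[k - 1 - i] for i in range(k))
        loss = c[k] * sum(K[k, j] * c[j] for j in range(n))
        dc[k] = gain - loss
    return c + dt * dc

# Constant kernel, monomers only at t=0.  Total mass sum_k (k+1)*c_k is
# conserved up to leakage into sizes beyond the largest tracked one.
n = 30
c = np.zeros(n)
c[0] = 1.0
K = np.ones((n, n))
for _ in range(100):               # integrate to t = 1
    c = smoluchowski_step(c, K, dt=0.01)
print(c[:3])
```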