Abstract: The local Langlands correspondence (LLC) predicts a deep relationship between representations of a reductive group $G(F)$ over a nonarchimedean local field $F$ and certain homomorphisms of the local Galois group into the Langlands dual group $\widehat G$. A key quantitative invariant on both sides is the depth—measuring, roughly, how far a representation or parameter is from being unramified. However, for general groups, the usual notion of depth fails to be preserved by the LLC.
In this talk, I will explain a revised definition of depth for Langlands parameters that restores depth preservation in full generality, in particular for all tori. This refinement allows one to formulate and prove new structural results connecting representation theory across local fields of different characteristics. Specifically, I will describe a “close-field” correspondence showing that, when the residue characteristic is large, each Bernstein block of $G(F)$ in characteristic $p$ is canonically equivalent to a corresponding block for a characteristic-zero group $G'(F')$ with $F'$ suitably $\ell$-close to $F$. Consequently, harmonic-analytic results proved in characteristic $0$ can be transferred verbatim to characteristic $p$.
Along the way, we establish several general tools of independent interest: a depth-transfer function generalizing the Hasse–Herbrand function, truncated isomorphisms for arbitrary tori and parahoric subgroups, and a Kazhdan-type Hecke algebra isomorphism preserving both depth and supercuspidality. The talk will emphasize the conceptual ideas behind these constructions rather than technical details.
Abstract: In this research talk, I will present my recent advances in developing Physics-Informed Neural Network (PINN) frameworks for solving problems in both environmental monitoring and biomedical imaging. The first part of the talk focuses on the characterization of dynamic chemical sources in fluid environments. By integrating time-resolved sensor data of concentration, velocity, and pressure with governing transport equations, the proposed PINN model reconstructs the spatio-temporal evolution of unknown emission sources, providing accurate localization and strength estimation even with sparse measurements. Building upon these ideas, the second part of the talk introduces a physics-guided learning framework for predicting hemodynamic quantities in cerebral aneurysms. Traditional CT angiography lacks temporal flow and pressure information; hence, I employ simulated sinogram data combined with the physics of mass transport and blood flow to infer dynamic hemodynamic fields. Together, these studies highlight the potential of scientific machine learning to bridge the gap between data-driven models and physical understanding, enabling interpretable and generalizable predictions across domains ranging from environmental sensing to precision medicine.
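As a rough illustration of the physics-informed loss underlying the first part, the sketch below (my own schematic, not the speaker's implementation) trains a network to output both a concentration field and an unknown source term for a one-dimensional advection-diffusion transport equation with an assumed constant velocity and diffusivity; the sensor data and all names here are placeholders.

import torch
import torch.nn as nn

# Minimal PINN sketch (illustrative): a network predicts concentration c(x, t)
# and an unknown source field s(x, t); the loss combines a data misfit on sparse
# sensor readings with the residual of the 1-D advection-diffusion equation
#   c_t + u * c_x - D * c_xx = s,
# with assumed constant velocity u and diffusivity D.
class SourcePINN(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 2),          # outputs: [c, s]
        )

    def forward(self, x, t):
        out = self.net(torch.cat([x, t], dim=1))
        return out[:, :1], out[:, 1:]     # concentration, source

def pinn_loss(model, x_data, t_data, c_data, x_col, t_col, u=1.0, D=0.1):
    # Data misfit at sparse sensor locations.
    c_pred, _ = model(x_data, t_data)
    data_loss = torch.mean((c_pred - c_data) ** 2)

    # PDE residual at collocation points, via automatic differentiation.
    x_col = x_col.requires_grad_(True)
    t_col = t_col.requires_grad_(True)
    c, s = model(x_col, t_col)
    c_t = torch.autograd.grad(c, t_col, torch.ones_like(c), create_graph=True)[0]
    c_x = torch.autograd.grad(c, x_col, torch.ones_like(c), create_graph=True)[0]
    c_xx = torch.autograd.grad(c_x, x_col, torch.ones_like(c_x), create_graph=True)[0]
    residual = c_t + u * c_x - D * c_xx - s
    return data_loss + torch.mean(residual ** 2)

# Usage sketch with placeholder sensor readings and random collocation points.
model = SourcePINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_d, t_d = torch.rand(50, 1), torch.rand(50, 1)
c_d = torch.zeros(50, 1)                  # placeholder sensor readings
x_c, t_c = torch.rand(500, 1), torch.rand(500, 1)
for _ in range(100):
    opt.zero_grad()
    loss = pinn_loss(model, x_d, t_d, c_d, x_c, t_c)
    loss.backward()
    opt.step()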
Abstract: With ChatGPT becoming a buzzword, many people are wondering about the relevance of mathematics, statistics ...
These questions have been around for a long time; only the buzzwords have changed - Analytics, AIML, Data Science, Big Data...
In this talk, I will explain my view that several mathematical and statistical ideas are as relevant now as they have ever been, and perhaps even more so, since the availability of data and computing power is exploding at a remarkable speed and these techniques are impacting every walk of life.
Using historical examples, I will illustrate that incorrect use of these techniques can lead, and has led, to grossly incorrect conclusions.
Many of the tools and techniques developed over the centuries may no longer be relevant, but the underlying ideas remain relevant and important.
Abstract: With the aim of giving a suitable setting for studying mixed-type equations (e.g. the Tricomi equation), K. O. Friedrichs introduced positive symmetric systems of first-order PDEs in 1958, which nowadays are better known as Friedrichs systems. These systems have proved to be a unified approach to studying various types of PDEs.
We shall cover an introduction to the theory, examples, and recent operator-theoretic developments. Finally, we shall discuss some open problems regarding the spectral analysis of such systems.
Abstract: We begin with an introduction to the subconvexity problem for $L$-functions and the delta method, a powerful and versatile tool in this area. As applications, we present two new results. The first is a sub-Weyl bound for $GL(2)$ $L$-functions, marking the first breakthrough beyond the Weyl barrier outside the $GL(1)$ case. The second is a new upper bound for the Riemann zeta function, improving upon the previous record of Bourgain.
The proof relies on a refinement of the so-called "trivial" delta method.
Abstract: For an elliptic equation with a singular source subject to a Dirichlet boundary condition, the boundary data may be interpreted in the usual sense for sufficiently smooth solutions, if such solutions exist. For general problems, however, the boundary data must be interpreted in a weak sense, where classical solutions cease to exist. In this talk we will present a brief overview of weak solutions to such singular problems and of how to interpret the boundary data.
Abstract: Classical invariant theory in its present form owes its beginnings to the foundational work of Hermann Weyl. The conceptual framework laid out by Weyl for studying polynomial invariants under the action of groups, which he termed `classical groups', has bridged algebra and geometry via invariant theory. Building on this framework, Procesi later looked at the conjugation action of the classical groups on the space of $m$-tuples of matrices. This work has since been a blueprint for much of the development in classical invariant theory. In this talk, we revisit this problem and generalize it to the setting of mixed tensor spaces. We illustrate why the invariant rings of mixed tensor spaces are particularly interesting. We then recast this problem to the setting of superspaces and illustrate how Procesi's ideas resonate even as the nature of symmetry evolves in mathematics.
Abstract: In applied statistics, motivated by specific datasets and applications, we build statistical models that capture the key features exhibited by the data, then study the model properties theoretically or using simulation studies to ensure that inferences are valid, and finally apply the developed methodology to answer the underlying research questions. My research mainly focuses on applications related to environmental datasets, covering key climate parameters like temperature, precipitation, wildfire counts and frequencies, sea surface temperature, and groundwater pollutants like arsenic or perfluorooctane sulfonate. Another focus is on solving data challenge competitions in high-dimensional spatial statistics and extreme value analysis, where the organizers provide datasets and related questions, and the participants need to develop or explore a suitable method to answer those questions as accurately as possible. Apart from these, some of my papers also deal with changepoints in the extremal behavior of stock markets and population dynamics. In this seminar, I will highlight a project where we aim to estimate population density at a 30m x 30m resolution over the Bruhat Bengaluru Mahanagara Palike (BBMP) area, leading to 0.8 million pixels, based on ward-level population data and alternative data obtained using satellite imaging and computer vision. I will briefly discuss the 2023 KAUST Large Spatial Data Competition, where our team won first place. I will also discuss the teaching philosophy I explored over seven semesters, receiving the director’s appreciation every semester. With six PhD students, several BS/BS-MS and MSc students, and several existing and new collaborations, I will finish with the Vision 2030 of our Spatial Statistics Research Group at IIT Kanpur.
Abstract: The representation theory of classical groups has long been central to modern mathematics: Young classified the complex representations of symmetric groups in the early 20th century, and Green determined those of general linear groups over finite fields in 1955. A natural extension is to study groups of the form GL_n(R), where R is a finite principal ideal local ring, such as Z/p^kZ or F_p[t]/(t^k). These groups play a key role in understanding the finite-dimensional continuous representations of GL_n(O), the general linear group over the ring of integers of a non-Archimedean local field, and also arise naturally in the context of representation zeta functions of arithmetic groups. Indeed, questions about the abscissa of convergence of these zeta functions, such as the conjecture of Larsen and Lubotzky for higher-rank lattices, lead directly to the study of GL_n(R).
In this talk, I will discuss the challenges in constructing irreducible representations of GL_n(R), highlighting how the situation differs from the finite field case. I will present a method for constructing irreducible representations of GL_3(R) (joint with Uri Onn and Amritanshu Prasad), which in particular resolves a conjecture of Uri Onn in this setting. I will then turn to the decomposition of Gelfand-Graev modules for GL_n(R). While the non-degenerate modules are known to be multiplicity-free, the structure of the degenerate modules remains largely open; I will describe some recent progress in this direction, based on joint work with Archita Gupta. I will conclude with some related open problems.
Abstract: Methods of homological algebra have deeply penetrated the study of algebraic structures and initiated several revolutions there. In our approach, this theme is pursued on three fronts: deformation of algebraic structures, categorification (homotopification), and the algebraic characterization of geometric structures (Lie algebroids, Courant pairs).
We will begin with some of the main features of deformation theory for algebraic structures, initiated by the monumental work of M. Gerstenhaber, its further developments, and computational tools for general Loday-type algebras (Leibniz algebras, Courant pairs). In the sequel, we will discuss some aspects of categorification (homotopification) for associative algebras with derivations. Lie algebras of derivations turn out to be crucial for the algebraic characterization of geometric structures in settings more general than smooth manifolds. We will consider some applications through Lie-Rinehart algebras and mention further developments.
Abstract: Vaughan Jones pioneered the theory of subfactors, which studies the relative position of a smaller type II_1 factor inside a larger one. This perspective has had profound consequences, from knot theory to quantum field theory. A natural extension of this framework is to investigate the relative positions of multiple subfactors. However, the general theory quickly becomes intricate. In this direction, Jones initiated a systematic study of pairs of subfactors, which naturally give rise to a quartet of von Neumann algebras.
In this talk, I will describe some of our contributions to this ongoing program, including results on the geometry of such quartets. I will also present the corresponding picture in the C*-algebraic setting, highlighting both similarities and differences. If time permits, I will conclude with our recent work on quantum relative entropy for pairs of subfactors.
Abstract: For a given elliptic curve E/Q, a spectacular result towards the Birch and Swinnerton-Dyer conjecture (a major unresolved problem in mathematics) is the Gross-Zagier & Kolyvagin (GKZ) theorem. For a prime p, the p-Selmer group of E/Q is a cohomological tool that gives deep insight into the structure of the rational points E(Q). About a decade ago, Skinner and W. Zhang (independently) began proving $p$-converse results to the GKZ theorem, using the p-Selmer group of E/Q. Such p-converse theorems have been used in Bhargava et al.'s work on the Birch and Swinnerton-Dyer conjecture. On the other hand, a recent result of Bhargava et al. used the 2-Selmer group of Mordell curves and a 2-converse to the GKZ theorem by Burungale-Skinner to make significant progress in a classical Diophantine problem related to integers that are expressible as a sum of two rational cubes.
In this talk, we will discuss some results on the rational cube sum problem, the p-Selmer group, and the p-converse to the GKZ theorem.
Abstract: Amenability of a group action is a dynamical generalisation of amenability for groups, with interesting applications in geometry and topology. Many (non-amenable) groups, like the Gromov hyperbolic groups, relatively hyperbolic groups, mapping class groups of surfaces and outer automorphism groups of free groups admit amenable actions. In this talk we will define amenable action of a group and outline two constructions of amenable actions for (i) acylindrically hyperbolic groups and (ii) hierarchically hyperbolic groups, which generalise the above classes of groups. This is based on a joint work with Partha Sarathi Ghosh.
Personal Homepage Link- https://presiuniv.ac.in/web/staff.php?staffid=201
Abstract: In this paper we address the problem of constructing a confidence ellipsoid of a multivariate normal mean vector based on a random sample from it. The central issue at hand is the sensitivity of the original data and hence the data cannot be directly used/analyzed. We consider a few perturbations of the original data, namely, noise addition and creation of synthetic data based on the plug-in sampling (PIS) method and the posterior predictive sampling (PPS) method. We review some theoretical results under PIS and PPS which are already available based on both frequentist and Bayesian analysis and derive the necessary results under noise addition. A theoretical comparison of all the methods based on expected volumes of the confidence ellipsoids is provided. A measure of privacy protection (PP) is discussed and its formulas under PIS, PPS and noise addition are derived and the different methods are compared based on PP. Applications include analysis of two multivariate datasets. The first dataset, with p = 2, is obtained from the latest Annual Social and Economic Supplement (ASEC) conducted by the US Census Bureau in 2023. The second dataset, with p = 3, pertains to renal variables obtained from the book by Harris and Boyd (1995). Using a synthetic version of the original data generated through PIS and PPS methods and also the noise added data, we produce and display the confidence ellipsoids for the unknown mean vector under various scenarios. Finally, the privacy protection measure is evaluated for various methods and different features.
Keywords: Bayesian credible set; Confidence ellipsoid; Noise addition; Plug-in sampling; Posterior predictive sampling; Privacy protection.
Based on joint work with Biswajit Basak and Yehenew Kifle (UMBC)
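The following sketch is illustrative only (placeholder data, and no correction for the perturbation, so it is not the paper's calibrated procedure): it shows two of the release mechanisms discussed above, noise addition and plug-in sampling (PIS), together with the usual Hotelling-type confidence ellipsoid for the mean computed directly from the released data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Original (sensitive) sample from a p-variate normal; here p = 2 for illustration.
n, p = 100, 2
X = rng.multivariate_normal(mean=[1.0, 2.0], cov=[[1.0, 0.3], [0.3, 2.0]], size=n)

# Perturbation 1: noise addition -- release X + E with independent Gaussian noise.
sigma_noise = 0.5
X_noise = X + rng.normal(scale=sigma_noise, size=X.shape)

# Perturbation 2: plug-in sampling (PIS) -- release a synthetic sample drawn from
# the fitted normal N(xbar, S) instead of the original data.
xbar, S = X.mean(axis=0), np.cov(X, rowvar=False)
X_pis = rng.multivariate_normal(xbar, S, size=n)

def hotelling_ellipsoid(Y, alpha=0.05):
    """Center, shape matrix and cutoff of the usual (1 - alpha) confidence ellipsoid
    {mu : n (ybar - mu)' S^{-1} (ybar - mu) <= c} for the mean, based directly on the
    released data Y (no adjustment for the perturbation)."""
    n, p = Y.shape
    ybar, S = Y.mean(axis=0), np.cov(Y, rowvar=False)
    c = p * (n - 1) / (n - p) * stats.f.ppf(1 - alpha, p, n - p)
    return ybar, S / n, c

for name, Y in [("noise-added", X_noise), ("PIS synthetic", X_pis)]:
    center, shape, c = hotelling_ellipsoid(Y)
    area = np.pi * c * np.sqrt(np.linalg.det(shape))   # exact area formula for p = 2
    print(name, "center:", np.round(center, 3), "ellipsoid area:", round(area, 4))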
Brief bio of the speaker:
PhD 1973, University of Calcutta, Calcutta, India
Presidential Research Professor, UMBC
University System of Maryland Board of Regents Research Professor
Fellow, Institute of Mathematical Statistics
Fellow, American Statistical Association
Professor Sinha is the Founder of the Statistics Graduate Program at UMBC. Having earned his PhD in statistics from the University of Calcutta, India, in 1973, Professor Sinha is a former faculty member of the Indian Statistical Institute and the University of Pittsburgh. A Professor of Statistics at UMBC since 1985, Professor Sinha has research activities spanning topics in theoretical and applied statistics, including multivariate analysis, linear models, ranked set sampling, environmental statistics, statistical meta-analysis, and data analysis under confidentiality protection. He has co-edited several volumes and co-authored four books (John Wiley, Springer, Academic). He is a Fellow of the American Statistical Association and the Institute of Mathematical Statistics, and an elected member of the International Statistical Institute. His research has been funded by the US Environmental Protection Agency for about twenty years.
Professor Sinha's research contribution in the area of environmental statistics has been recognized through a Distinguished Achievement Award from the Environmental Statistics Section of the American Statistical Association. In acknowledgment of his research productivity, Professor Sinha was named a Presidential Research Professor in 2008. Furthermore, he received the University System of Maryland Board of Regents Excellence in Research award in 2012.
Professor Sinha has served on the editorial board of several national and international statistics journals, and mentored 35 PhD students.
Abstract: The journey of neural networks from the 1950s Perceptron to the modern Transformer is a story of continued innovation in architecture, theory, and applications. Machine learning dawned with linear classification in the 1950s; the 1980s brought a resurgence through multi-layer perceptrons and backpropagation, which made deeper architectures possible; and the 1990s and 2000s saw specialization, with convolutional neural networks (CNNs) for vision and recurrent neural networks (RNNs) for sequences. The rise of deep learning frameworks and the scalability they enabled then led to a series of breakthroughs in speech, vision, and natural language processing. Finally, attention mechanisms and the Transformer model brought massive parallelization, further redefining the state of the art in language, vision, and multi-modal AI. This is the journey of a shift in focus from biologically inspired models to massively parallel and highly scalable architectures, which brought us to the era of foundation models and generative AI. In this lecture we will trace the journey of neural networks from the 1950s to 2025, along with the challenges and shortcomings encountered along the way.
Abstract: Gaussian statistics accurately capture the large-scale behaviour of many physical and mathematical systems. However, several important models with an intricate correlation structure, arising from nonlinear interactions between randomness and key observables, do not fit into this framework. Despite their differences, these models share striking macroscopic behaviour, leading to the Kardar–Parisi–Zhang (KPZ) universality class, characterized by distinctive scaling exponents, limiting distributions, and critical objects.
A central approach to studying KPZ universality is through exactly solvable models. These models capture relevant physical phenomena and also connect deeply to combinatorics, PDEs, random matrix theory, and representation theory, allowing explicit determinantal formulas for key observables. These models have yielded major breakthroughs and unveiled rich structures in the randomness of such complex systems. I will introduce one such model class—directed polymers in random environments—and go over some results relating to their universality behaviour.
Abstract: Fatou-Bieberbach domains in $\mathbb{C}^k, k \ge 2$, are proper subdomains of $\mathbb{C}^k$ that are biholomorphic to $\mathbb{C}^k$. Such domains clearly do not exist in $\mathbb{C}$ and arise naturally from the dynamics of automorphisms of $\mathbb{C}^k$ as attracting basins of fixed points. The goal of this talk is to discuss the relevance of these domains in the context of geometry, complex function theory and holomorphic dynamics. In particular, we will survey a brief list of connected results arising from the dynamics of Hénon maps (polynomial automorphisms of $\mathbb{C}^2$), which also produce a class of domains called the `short $\mathbb{C}^2$'s', an analogue of Fatou-Bieberbach domains.
Further, we will introduce basins of non-autonomous (holomorphic) dynamical models in $\mathbb{C}^k, k \ge 2$, and discuss an affirmative answer to a long-standing open problem, Bedford's conjecture, on a generalized construction of Fatou-Bieberbach domains.
This is a collection of results jointly obtained in collaboration with Ratna Pal and Kaushal Verma.
Abstract: The emergence of Large Language Models (LLMs) has brought in concomitant concerns about the security and reliability of generative AI systems.
While LLMs promise powerful capabilities in diverse real-world applications, ensuring that their outputs are resilient to malicious attacks and consistent across similar inputs poses significant methodological and computational challenges. This situation calls for revisiting modern deep learning architectures through a statistical lens.
I will present two interconnected themes in this area. First, I will introduce Representation Noising (RepNoise), a defense mechanism that protects the weights of open-source LLMs against malicious uses. RepNoise achieves this through controlled noise injection into the knowledge representations inside a model, which makes it harder to recover harmful information later. Second, I will discuss my work on the consistency problem (the analogue of robustness for LLMs), concerned with measuring and minimizing the sensitivity of LLM outputs to input variations through a combination of controlled synthetic data generation and fine-tuning.
I will conclude by discussing ongoing work at the intersection of AI security and statistics, including the development of statistical bounds for the strength of defense mechanisms like RepNoise, and robustness frameworks for ensuring AI system reliability in high-stakes applications.
Abstract: In this talk, I'll show that there is a resolution of singularities of a quasitoric orbifold. Then I'll prove that a quasi-contact toric manifold is equivariantly a boundary. This result implies that good contact toric manifolds and generalized lens spaces are equivariantly boundaries. Then I'll compute the equivariant cohomology ring of quasi-contact toric manifolds.
Personal Homepage Link- https://home.iitm.ac.in/soumen/
Abstract: In the first part of this talk, we will explore the theoretical properties of Rectified Flow (RF), a generative model that aims to learn straight flow trajectories from noise to data using a sequence of convex optimization problems with close ties to optimal transport. If the trajectory is curved, one must use many Euler discretization steps or novel strategies, such as exponential integrators, to achieve a satisfactory generation quality. In contrast, RF has been shown to theoretically straighten the trajectory through successive rectifications, reducing the number of function evaluations (NFEs) while sampling. It has also been shown empirically that RF may improve the straightness in two rectifications if one can solve the underlying optimization problem within a sufficiently small error. In this work, we make two contributions.
First, we provide a theoretical analysis of the Wasserstein distance between the sampling distribution of RF and the target distribution. Our error rate is characterized by the number of discretization steps and a novel formulation of straightness stronger than that in the original work.
Second, we present general conditions guaranteeing the uniqueness and straightness of 1-RF, which is in line with previous empirical findings.
As a byproduct of our analysis, we show that, in one dimension, RF started at the standard Gaussian distribution yields the Monge map. Additionally, we also present empirical results on both simulated and real datasets to validate our theoretical findings.
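As a point of reference for this first part, a schematic one-rectification recipe looks as follows (my own minimal sketch; the target distribution, network and step counts are placeholders): regress a velocity field onto the displacement $x_1 - x_0$ along linear interpolants, then sample with a few Euler steps.

import torch
import torch.nn as nn

# Schematic 1-rectification sketch: v(x, t) is trained so that, along the linear
# interpolant x_t = (1 - t) x0 + t x1 between noise x0 and data x1, it regresses
# the constant displacement x1 - x0.  Sampling integrates dx/dt = v(x, t) with
# Euler steps; a perfectly straight flow would need only one step.
dim = 2
v = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(),
                  nn.Linear(128, 128), nn.SiLU(),
                  nn.Linear(128, dim))
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

def sample_data(n):
    # Placeholder target distribution: a shifted Gaussian.
    return torch.randn(n, dim) + torch.tensor([4.0, 0.0])

for step in range(2000):
    x1 = sample_data(256)                 # data
    x0 = torch.randn_like(x1)             # noise
    t = torch.rand(x1.shape[0], 1)
    xt = (1 - t) * x0 + t * x1            # linear interpolant
    target = x1 - x0                      # straight-line velocity
    loss = ((v(torch.cat([xt, t], dim=1)) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling with a small number of Euler steps.
with torch.no_grad():
    x = torch.randn(1000, dim)
    n_steps = 4
    for k in range(n_steps):
        t = torch.full((x.shape[0], 1), k / n_steps)
        x = x + v(torch.cat([x, t], dim=1)) / n_steps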
In the next part, we will discuss our recent work on private online learning in the presence of high-dimensional features. High-dimensional sparse linear bandits serve as an efficient model for sequential decision-making problems (e.g., personalized medicine), where high-dimensional features (e.g., genomic data) on the users are available, but only a small subset of them is relevant. Motivated by data privacy concerns in these applications, we study joint differentially private high-dimensional sparse linear bandits, where both rewards and contexts are considered as private data. First, to quantify the cost of privacy, we derive a lower bound on the regret achievable in this setting. To further address the problem, we design a computationally efficient bandit algorithm, ForgetfuL Iterative Private HArd Thresholding (FLIPHAT). Along with the doubling of episodes and episodic forgetting, FLIPHAT deploys a variant of the Noisy Iterative Hard Thresholding (N-IHT) algorithm as a sparse linear regression oracle to ensure both privacy and regret-optimality. We show that FLIPHAT achieves optimal regret in terms of privacy parameters, context dimension, and time horizon up to a linear factor in model sparsity in the problem-independent case. We analyze the regret by providing a novel refined analysis of the estimation error of N-IHT, which is of independent interest.
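To indicate the flavour of the sparse-regression oracle mentioned above, here is a schematic noisy iterative hard thresholding loop (my own sketch: the step size and noise scale are placeholders not calibrated to any formal differential-privacy guarantee, and FLIPHAT's episodic doubling and forgetting are omitted).

import numpy as np

def noisy_iht(X, y, s, n_iter=50, eta=None, noise_scale=0.1, rng=None):
    """Schematic noisy iterative hard thresholding (N-IHT) for sparse regression.
    Each iteration takes a gradient step on the least-squares loss, adds Gaussian
    noise (a stand-in for the privacy mechanism), and keeps the s largest entries."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    eta = 1.0 / np.linalg.norm(X, ord=2) ** 2 if eta is None else eta
    beta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = beta - eta * n * grad + rng.normal(scale=noise_scale, size=d)
        keep = np.argsort(np.abs(beta))[-s:]          # hard threshold: keep top-s support
        truncated = np.zeros(d)
        truncated[keep] = beta[keep]
        beta = truncated
    return beta

# Usage sketch on synthetic sparse data.
rng = np.random.default_rng(0)
n, d, s = 500, 200, 5
beta_true = np.zeros(d); beta_true[:s] = 1.0
X = rng.normal(size=(n, d))
y = X @ beta_true + 0.1 * rng.normal(size=n)
print("recovered support:", np.flatnonzero(noisy_iht(X, y, s, rng=rng)))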
Abstract: Integrating information across correlated datasets is a central challenge in many contemporary data analysis problems. Despite numerous methods available for this purpose, the lack of clarity regarding their statistical properties poses significant hurdles to achieving robust statistical inference. In this talk, I shall introduce a novel method called Orchestrated Approximate Message Passing for integrating information across multiple correlated datasets. This method is both computationally efficient and statistically optimal under a stylized model, and its asymptotic properties enable users to construct asymptotically valid prediction sets.
Subsequently, I shall describe how to use this technique to construct cell atlases from multi-modal single-cell data and how to query these atlases with partial molecular features. Finally, I shall present a technique for constructing prediction sets of the multi-modal spectral embeddings for new cells with only one observed modality, utilizing the atlas.
Abstract: The deformation theory of modular forms is increasingly attracting many researchers in arithmetic geometry, as it was an important step in the proof of Fermat’s last theorem by Wiles (and Taylor) and supplied an effective tool for the study of the p-adic Birch and Swinnerton-Dyer conjecture in Skinner-Urban's proof of the divisibility of the characteristic power series of the Selmer group of a rational elliptic curve by its p-adic L-function, under appropriate assumptions. I will try to give my background motivation for creating the theory and describe an outline of the theory.
Abstract: The membrane model (MM) is a random interface model for separating surfaces that tend to preserve curvature. It is a Gaussian interface whose inverse covariance is given by the discrete bilaplacian operator. It is a very close relative of the discrete Gaussian free field, for which the inverse covariance is given by the discrete Laplacian operator. We consider the MM on the d-dimensional integer lattice. We study its scaling limit using some discrete PDE techniques involving finite difference approximation of elliptic boundary value problems. Also, we discuss the behavior of the maximum of the model. Then we consider the MM on regular trees and investigate a random walk representation for the covariance. Exploiting the random walk representation for the covariance, we determine the behavior of the maximum of the MM on regular trees.
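To make the definition concrete (this is the standard finite-volume formulation, with the normalization of the discrete Laplacian $\Delta$ left unspecified): on a box $\Lambda \subset \mathbb{Z}^d$ with zero boundary values, the membrane model is the Gaussian field with density
$$\mathbb{P}(\mathrm{d}\varphi)\;\propto\;\exp\Big(-\tfrac12\sum_{x}\big(\Delta\varphi(x)\big)^{2}\Big)\prod_{x\in\Lambda}\mathrm{d}\varphi(x),$$
so that its inverse covariance is the discrete bilaplacian $\Delta^{2}$; replacing the squared Laplacian by the squared gradient gives the discrete Gaussian free field mentioned above.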
Abstract: Verbal autopsy (VA) algorithms are widely used in low- and middle-income countries (LMICs) to determine individual causes of death (COD), which are then aggregated to estimate population-level mortality crucial for public health policymaking. However, VA algorithms often misclassify COD, leading to biased mortality estimates. A recent method, VA-calibration, aims to correct this bias by incorporating the VA misclassification rate derived from limited labeled COD data collected in the CHAMPS project. Due to limited labeled samples, data are pooled across countries to enhance estimation precision, implicitly assuming uniform misclassification rates.
In this presentation, I will highlight substantial cross-country heterogeneity in VA misclassification. This challenges the homogeneity assumption and increases bias. To address this issue, I propose a comprehensive framework for modeling country-specific misclassification matrices in data-scarce settings. The framework introduces an innovative base model that parsimoniously characterizes the misclassification matrix using two latent mechanisms: intrinsic accuracy and systematic preference.
We establish that these mechanisms are theoretically identifiable from the data and manifest as an invariance in misclassification odds, a pattern observed in CHAMPS data. Building on this, the framework integrates cross-country heterogeneity through interpretable effect sizes and employs shrinkage priors to balance the bias-variance tradeoff in misclassification matrix estimation. This enhances the applicability of VA-calibration and strengthens ongoing efforts to leverage VA for mortality surveillance. I will illustrate these advancements through applications to projects such as COMSA in Mozambique and CA CODE.
Abstract: In this talk, we discuss a scale invariant Harnack inequality for some non-homogeneous parabolic equations in a suitable intrinsic geometry dictated by the nonlinearity, which, in particular, implies the Hölder continuity. We also discuss a Harnack type estimate on a global scale which quantifies the strong minimum principle.
This talk is based on a joint work with Vesa Julin.
Abstract: A significant achievement in modern mathematics has been the classification of characters of finite groups of Lie type. This classification comes from Deligne-Lusztig theory and Lusztig's Jordan decomposition. The latter, inspired by the classical matrix decomposition, allows us to factorize characters into "semisimple" and "unipotent" components, greatly simplifying their study.
In this talk, we will discuss the construction of a unique Jordan decomposition of characters for arbitrary connected reductive groups. This result substantially extends the previous framework established by Digne-Michel, which was limited to groups with connected centers.
Our approach differs from earlier attempts by constructing the Jordan decomposition one Harish-Chandra series at a time. The key insight comes from establishing isomorphisms between the endomorphism algebras associated with cuspidal characters and those of their unipotent counterparts.
This construction has several significant consequences. It allows us to systematically reduce many representation-theoretic problems to their unipotent counterparts. We will demonstrate this in a couple of widely studied problems, namely Frobenius-Schur indicators and dualizing involutions. It also resolves the Commutation Problem, which had been open in the subject for a while.
We will discuss how this decomposition interacts with questions in the representation theory of p-adic groups. This is a joint work with Prashant Arote.
Abstract: Importance sampling (IS) is an elegant, theoretically sound, flexible, and simple-to-understand methodology for approximation of intractable integrals and probability distributions. The only requirement is the point-wise evaluation of the targeted distribution. The basic mechanism of IS consists of (a) drawing samples from simple proposal densities, (b) weighting the samples by accounting for the mismatch between the targeted and the proposal densities, and (c) approximating the moments of interest with the weighted samples. The performance of IS methods directly depends on the choice of the proposal functions. For that reason, the proposals have to be updated and improved with iterations so that samples are generated in regions of interest. In this talk, we will first introduce the basics of IS and multiple IS (MIS), motivating the need to use several proposal densities. Then, the focus will be on motivating the use of adaptive IS (AIS) algorithms, describing an encompassing framework of recent methods in the current literature. Finally, we review the problem of combining Monte Carlo estimators in the context of MIS and AIS.
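A minimal self-normalized importance sampling example, matching steps (a)-(c) above (the target and proposal here are toy choices for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Target: an unnormalized density, evaluated only pointwise (here proportional to
# a Gamma(3, 1) density, whose true mean is 3).
def target_unnorm(x):
    return np.where(x > 0, x ** 2 * np.exp(-x), 0.0)

# (a) Draw samples from a simple proposal density (a wide Gaussian).
mu_q, sigma_q, N = 3.0, 3.0, 100_000
x = rng.normal(mu_q, sigma_q, size=N)

# (b) Weight the samples to account for the target/proposal mismatch.
q = np.exp(-0.5 * ((x - mu_q) / sigma_q) ** 2) / (sigma_q * np.sqrt(2 * np.pi))
w = target_unnorm(x) / q
w_norm = w / w.sum()                      # self-normalized weights

# (c) Approximate the moments of interest with the weighted samples.
mean_est = np.sum(w_norm * x)
ess = 1.0 / np.sum(w_norm ** 2)           # effective sample size diagnostic
print(f"estimated mean = {mean_est:.3f}, ESS = {ess:.0f}")

The effective sample size printed at the end is the usual diagnostic of proposal quality, and is exactly the kind of quantity that adaptive IS schemes try to improve by moving the proposals toward regions of interest.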
Abstract: J&J is one of the biggest names in oncology drug development and related research. In first half of my talk, I’ll talk about breadth and width of oncology research in J&J. I am going to go through a little bit about what a statistician does in their day-to-day job in clinical research industry. In the second half, I’ll go over a couple of real-life applications of statistics to meet research requirements in this process of drug development. These two examples are around a broad area of causal inference, which is a useful tool being used very frequently to answer some crucial questions in clinical research.
Abstract: The Ihara zeta function of a graph has many properties analogous to the Riemann zeta function, and is conceptually simpler to understand. We will prove the Ihara-Bass determinant formula for the zeta function, and calculate it in special cases. Some functional relations of zeta functions for regular graphs will also be discussed. A remarkable fact is that for regular graphs, the graph-theory Riemann hypothesis holds if and only if the graph is a Ramanujan graph. If time permits, covering spaces of graphs and divisibility properties of their zeta and L-functions will also be considered.
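For concreteness, the Ihara-Bass determinant formula referred to above reads, for a finite connected graph $X$ with $n$ vertices, $m$ edges, adjacency matrix $A$ and diagonal degree matrix $D$ (standard background, recalled here for convenience),
$$\zeta_X(u)^{-1} \;=\; (1-u^{2})^{\,m-n}\,\det\!\big(I - Au + (D-I)u^{2}\big),$$
where $m - n = r - 1$ with $r$ the rank of the fundamental group of $X$. For a $(q+1)$-regular graph one substitutes $u = q^{-s}$ to phrase the graph-theory Riemann hypothesis mentioned in the abstract.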
Abstract: Based on the analogy between number and function rings, delta geometry was developed by A. Buium where for a fixed prime $p$, the notion of a $p$-derivation $\delta$ plays the role of 'differentiation' for number rings. Such a $p$-derivation $\delta$ comes from the $p$-typical Witt vectors and as a result, delta geometry naturally encodes valuable arithmetic information, especially those that pertain to lifts of Frobenius.
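To fix notation (a standard definition from Buium's theory, recalled here for convenience): for a ring $A$ on which $p$ is a nonzerodivisor, a $p$-derivation is a map $\delta\colon A\to A$ with $\delta(1)=0$ satisfying
$$\delta(x+y)=\delta(x)+\delta(y)+\frac{x^{p}+y^{p}-(x+y)^{p}}{p},\qquad \delta(xy)=x^{p}\,\delta(y)+y^{p}\,\delta(x)+p\,\delta(x)\delta(y),$$
which is exactly the condition that $\phi(x)=x^{p}+p\,\delta(x)$ be a ring homomorphism lifting the Frobenius modulo $p$. On $\mathbb{Z}$ the basic example is the Fermat quotient $\delta(n)=(n-n^{p})/p$.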
For an abelian scheme, using the theory of delta geometry, one canonically attaches a filtered isocrystal that bears a natural map to the crystalline cohomology. In this talk, we will discuss some comparison results that explain its relation to crystalline cohomology.
Abstract: We study graphical models for cardinal paired comparison data with and without covariates. Novel, graph-based, necessary and sufficient conditions which guarantee strong consistency, asymptotic normality and the exponential convergence of the estimated ranks are emphasized. A complete theory for models with covariates is laid out. In particular, conditions under which covariates can be safely omitted from the model are provided. The methodology is employed in the analysis of both finite and infinite sets of ranked items, specifically in the case of large sparse comparison graphs. The proposed methods are explored by simulation and applied to the ranking of teams in the National Basketball Association (NBA).
Abstract: The Hopf map is a continuous map from the $3$-sphere to the $2$-sphere, exhibiting a many-to-one relationship, where each point of the $2$-sphere arises from a distinct great circle on the $3$-sphere. This mapping is instrumental in generating the third homotopy group of the $2$-sphere. In this talk, I will present a minimal pseudo-triangulation of the Hopf map and establish its uniqueness. Additionally, I will show that the pseudo-triangulation corresponding to the $3$-sphere admits a $4$-coloring.
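For reference (standard background, not specific to the triangulations of the talk), the Hopf map can be written explicitly by regarding $S^3$ as the unit sphere in $\mathbb{C}^2$ and $S^2$ as the unit sphere in $\mathbb{C}\times\mathbb{R}$:
$$\eta\colon S^{3}\to S^{2},\qquad \eta(z_{1},z_{2}) \;=\; \big(2z_{1}\bar{z}_{2},\,|z_{1}|^{2}-|z_{2}|^{2}\big),$$
and the fibre over any point of $S^2$ is the great circle $\{(e^{i\theta}z_{1},e^{i\theta}z_{2}) : \theta\in[0,2\pi)\}$, which is the many-to-one structure described above.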
Abstract: In this talk, we characterize normal $3$-pseudomanifolds \( K \) with \( g_2(K) \leq 4 \). It is known that if a normal $3$-pseudomanifold \( K \) with \( g_2(K) \leq 4 \) has no singular vertices, then it is a triangulated $3$-sphere. We first prove that a normal $3$-pseudomanifold \( K \) with \( g_2(K) \leq 4 \) has at most two singular vertices. Subsequently, we show that if \( K \) is not a triangulated $3$-sphere, it can be obtained from certain boundary complexes of $4$-simplices by a sequence of operations, including connected sums, edge expansions, and edge folding. Furthermore, we establish that such a $3$-pseudomanifold \( K \) is a triangulation of the suspension of \( \mathbb{RP}^2 \). Additionally, by building upon the results of Walkup, we provide a reframed characterization of normal $3$-pseudomanifolds with no singular vertices for \( g_2(K) \leq 9 \).
Abstract: In this seminar, I shall discuss several estimators of finite population mean, when the data are infinite dimensional in nature. The performance of these estimators will be compared under different sampling designs and superpopulations satisfying linear models based on their asymptotic distributions. One of the major findings is that although the use of the auxiliary information in the estimation stage usually improves the performance of different estimators, the use of the auxiliary information in the sampling design stage often has adverse effects on the performance of these estimators. This seminar is based on a joint research work with my Ph.D. supervisor Prof. Probal Chaudhuri.
Abstract: The development of structure-preserving time integrators has been a major focus of numerical analysis for the last few decades. In the first part of my presentation, I will discuss relaxation Runge-Kutta (RK) methods, designed to preserve essential conserved quantities during time integration. I will first demonstrate how a slight modification of RK methods can be employed to conserve a single nonlinear invariant.
Subsequently, I will introduce the generalization of the relaxation approach for RK methods to conserve multiple nonlinear invariants in a dynamical system. The significance of preserving invariants and its impact on long-term error growth will be illustrated through numerical examples.
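To illustrate the single-invariant relaxation mechanism on the simplest possible example (a sketch of the idea only, not the constructions of the talk): for the harmonic oscillator with its quadratic energy, the classical RK4 increment $d$ is rescaled to $u + \gamma d$ with $\gamma$ chosen so that the energy is exactly conserved; for a quadratic invariant, $\gamma$ has a closed form.

import numpy as np

# Harmonic oscillator u' = f(u), u = (x, v), with conserved energy E(u) = (x^2 + v^2)/2.
def f(u):
    x, v = u
    return np.array([v, -x])

def energy(u):
    return 0.5 * np.dot(u, u)

def rk4_increment(u, dt):
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def relaxed_rk4_step(u, dt):
    d = rk4_increment(u, dt)
    # Choose gamma with E(u + gamma * d) = E(u).  For the quadratic invariant
    # E(u) = |u|^2 / 2 the nontrivial root is gamma = -2 <u, d> / |d|^2, which
    # is close to 1 for small dt.
    gamma = -2.0 * np.dot(u, d) / np.dot(d, d)
    return u + gamma * d

u0 = np.array([1.0, 0.0])
u_std, u_rel = u0.copy(), u0.copy()
dt, n_steps = 0.5, 2000
for _ in range(n_steps):
    u_std = u_std + rk4_increment(u_std, dt)   # classical RK4: energy drifts
    u_rel = relaxed_rk4_step(u_rel, dt)        # relaxation RK4: energy preserved
print("energy drift, classical RK4 :", energy(u_std) - energy(u0))
print("energy drift, relaxation RK4:", energy(u_rel) - energy(u0))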
In the second part, I will address another crucial challenge in high-order time integration encountered by RK methods: the phenomenon of order reduction in RK methods applied to stiff problems, along with its remedy.
I will first illustrate this issue in RK methods and then introduce the remedy through high Weak Stage Order (WSO), capable of alleviating order reduction in linear problems with time-independent operators.
Additionally, I will briefly discuss stiff order conditions, which are more general and can eliminate order reduction for a broader class of problems, specifically semilinear problems. This extension is essential to overcome the limitations of WSO, which primarily focuses on linear problems.
Abstract: Modern biological studies often involve large-scale hypothesis testing problems, where hypotheses are organized in a Directed Acyclic Graph (DAG). It has been established through widespread research that prior structural information can play a vital role in improving the power of classical multiple testing procedures and in obtaining valid and meaningful inference. In a DAG, each node represents a hypothesis, and the edges denote a logical sequence of relationships among these hypotheses that must be taken into account by a multiple testing procedure. A hypothesis rejected by the testing procedure should also result in the rejection of all its ancestors; we term this a "legitimate rejection." We propose an intuitive approach that applies a Benjamini-Hochberg type procedure on the DAG, and filters the set of rejected hypotheses to eliminate all illegitimate rejections. Additionally, we introduce a weighted version of this procedure, where each p-value is assigned a weight proportional to the number of non-null hypotheses within the group(s) defined by its parent node(s). This approach facilitates easier rejection of p-values in groups predominantly containing non-null hypotheses, while harder rejection is applied to p-values in groups with mostly null hypotheses. Our unweighted and weighted methods respectively simplify to the Benjamini-Hochberg procedure and the Storey-type Adaptive Benjamini-Hochberg procedure when the DAG is edge-free. Our methods are proven to control the False Discovery Rate (FDR) when applied to independent p-values. The unweighted method also controls FDR for PRDS p-values. Simulation studies confirm that the weighted data-adaptive version of our method also maintains similar FDR control, albeit under certain conditions. Our simulation studies further elucidate the scenarios where our proposed methods are more powerful than their competitors. This is a joint work with Dr. Marina Bogomolov, Technion - Israel Institute of Technology.
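The following sketch captures the unweighted idea in its simplest form (illustrative code for the idea as stated above, not the paper's exact procedure or its adaptive weighting): run Benjamini-Hochberg on all p-values and keep only the legitimate rejections, i.e., those whose ancestors in the DAG are all rejected.

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Indices rejected by the classical BH procedure at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    if not below.any():
        return set()
    k = np.max(np.nonzero(below)[0])          # largest rank passing the threshold
    return {int(i) for i in order[:k + 1]}

def ancestors(node, parents):
    """All ancestors of a node in a DAG given as {node: [parent, ...]}."""
    seen, stack = set(), list(parents.get(node, []))
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(parents.get(a, []))
    return seen

def dag_filtered_bh(pvals, parents, q=0.05):
    """BH rejections filtered so that every rejected hypothesis has all of its
    ancestors rejected as well ('legitimate rejections')."""
    rejected = benjamini_hochberg(pvals, q)
    return {h for h in rejected if ancestors(h, parents) <= rejected}

# Usage sketch: node 0 is a root, nodes 1 and 2 are its children, node 3 has parents 1 and 2.
pvals = [0.001, 0.20, 0.004, 0.003]
parents = {1: [0], 2: [0], 3: [1, 2]}
print(dag_filtered_bh(pvals, parents, q=0.05))   # node 3 is dropped: its parent 1 is not rejected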
Abstract: Hypothesis testing problems are fundamental to the theory and practice of statistics. It is well known that when the union of the null and the alternative does not encompass the full parameter space, the possibility of a Type III error arises, i.e., the null hypothesis may be rejected when neither the null nor the alternative is true. In such situations, common in the context of order restricted inference, the validity of our inferences may be severely compromised. The study of the geometry of the distance test, a test widely used in constrained inference, illuminates circumstances in which Type III errors arise and motivates the introduction of "safe tests". Heuristically, a safe test is a test which, at least asymptotically, is free of Type III errors.
A novel safe test is proposed and studied. The new testing procedure is associated with a "certificate of validity", a pre-test indicating whether the original hypotheses are consistent with the data.
Consequently, Type III errors can be addressed in a principled way and constrained tests can be carried out without fear of systematically incorrect inferences. Although we focus on testing problems arising in order restricted inference, the underlying ideas are more broadly applicable. The benefits associated with the proposed methodology are demonstrated by simulations and the analysis of several illustrative examples.