Abstract: The membrane model (MM) is a random interface model for separating surfaces that tend to preserve curvature. It is a Gaussian interface whose inverse covariance is given by the discrete bilaplacian operator, and it is a close relative of the discrete Gaussian free field, for which the inverse covariance is given by the discrete Laplacian operator. We consider the MM on the d-dimensional integer lattice. We study its scaling limit using discrete PDE techniques involving finite difference approximations of elliptic boundary value problems, and we also discuss the behavior of the maximum of the model. We then consider the MM on regular trees and investigate a random walk representation for the covariance. Exploiting this representation, we determine the behavior of the maximum of the MM on regular trees.
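For orientation, and in notation not taken from the talk, the finite-volume MM on a box $V_N \subset \mathbb{Z}^d$ with zero boundary conditions can be written as the Gaussian field with density
\[
P(\varphi) \propto \exp\Big( -\tfrac{1}{2} \sum_{x} \big( \Delta \varphi(x) \big)^2 \Big) \prod_{x \in V_N} d\varphi(x),
\]
where $\Delta$ denotes the discrete Laplacian (up to the chosen normalization), so that the inverse covariance is the discrete bilaplacian $\Delta^2$; replacing $(\Delta\varphi(x))^2$ by $|\nabla \varphi(x)|^2$ recovers the discrete Gaussian free field.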
Abstract: Verbal autopsy (VA) algorithms are widely used in low- and middle-income countries (LMICs) to determine individual causes of death (COD), which are then aggregated to estimate population-level mortality crucial for public health policymaking. However, VA algorithms often misclassify COD, leading to biased mortality estimates. A recent method, VA-calibration, aims to correct this bias by incorporating the VA misclassification rate derived from limited labeled COD data collected in the CHAMPS project. Due to limited labeled samples, data are pooled across countries to enhance estimation precision, implicitly assuming uniform misclassification rates.
In this presentation, I will highlight substantial cross-country heterogeneity in VA misclassification. This challenges the homogeneity assumption and increases bias. To address this issue, I propose a comprehensive framework for modeling country-specific misclassification matrices in data-scarce settings. The framework introduces an innovative base model that parsimoniously characterizes the misclassification matrix using two latent mechanisms: intrinsic accuracy and systematic preference.
We establish that these mechanisms are theoretically identifiable from the data and manifest as an invariance in misclassification odds, a pattern observed in CHAMPS data. Building on this, the framework integrates cross-country heterogeneity through interpretable effect sizes and employs shrinkage priors to balance the bias-variance tradeoff in misclassification matrix estimation. This enhances the applicability of VA-calibration and strengthens ongoing efforts to leverage VA for mortality surveillance. I will illustrate these advancements through applications to projects such as COMSA in Mozambique and CA CODE.
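As a schematic of the calibration step (in notation assumed here, not taken from the talk): if $p = (p_1,\dots,p_C)$ denotes the true cause-specific mortality fractions, $q$ the fractions of VA-assigned causes, and $M$ the misclassification matrix with entries $M_{ij} = \Pr(\text{VA assigns cause } j \mid \text{true cause } i)$, then
\[
q_j \;=\; \sum_{i=1}^{C} p_i \, M_{ij}, \qquad \text{i.e.} \quad q = M^{\top} p,
\]
and VA-calibration estimates $M$ from the limited labeled data in order to invert this relation; the proposed framework concerns modeling country-specific versions of $M$.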
Abstract: In this talk, we discuss a scale invariant Harnack inequality for some non-homogeneous parabolic equations in a suitable intrinsic geometry dictated by the nonlinearity, which, in particular, implies Hölder continuity of solutions. We also discuss a Harnack type estimate on a global scale which quantifies the strong minimum principle.
This talk is based on a joint work with Vesa Julin.
Abstract: A significant achievement in modern mathematics has been the classification of characters of finite groups of Lie type. This classification comes from Deligne-Lusztig theory and Lusztig's Jordan decomposition. The latter, inspired by the classical matrix decomposition, allows us to factorize characters into "semisimple" and "unipotent" components, greatly simplifying their study.
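To fix ideas (in standard notation, not necessarily that of the talk): for a connected reductive group $G$ over a finite field with connected center, with Frobenius $F$ and dual group $G^*$, Lusztig's Jordan decomposition gives, for each semisimple element $s \in (G^*)^F$, a bijection
\[
\mathcal{E}(G^F, s) \;\longleftrightarrow\; \mathcal{E}\big(C_{G^*}(s)^F, 1\big)
\]
between the Lusztig series attached to $s$ and the unipotent characters of the centralizer, under which degrees satisfy $\chi(1) = [G^{*F} : C_{G^*}(s)^F]_{p'} \, u(1)$ for corresponding characters $\chi$ and $u$.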
In this talk, we will discuss the construction of a unique Jordan decomposition of characters for arbitrary connected reductive groups. This result substantially extends the previous framework established by Digne-Michel, which was limited to groups with connected centers.
Our approach differs from earlier attempts by constructing the Jordan decomposition one Harish-Chandra series at a time. The key insight comes from establishing isomorphisms between the endomorphism algebras associated with cuspidal characters and those associated with their unipotent counterparts.
This construction has several significant consequences. It allows us to systematically reduce many representation-theoretic problems to their unipotent counterparts. We will demonstrate this for two widely studied problems, namely Frobenius-Schur indicators and dualizing involutions. It also resolves the commutation problem, which had been open in the subject for some time.
We will discuss how this decomposition interacts with questions in the representation theory of p-adic groups. This is a joint work with Prashant Arote.
Abstract: Importance sampling (IS) is an elegant, theoretically sound, flexible, and simple-to-understand methodology for approximation of intractable integrals and probability distributions. The only requirement is the point-wise evaluation of the targeted distribution. The basic mechanism of IS consists of (a) drawing samples from simple proposal densities, (b) weighting the samples by accounting for the mismatch between the targeted and the proposal densities, and (c) approximating the moments of interest with the weighted samples. The performance of IS methods directly depends on the choice of the proposal functions. For that reason, the proposals have to be updated and improved with iterations so that samples are generated in regions of interest. In this talk, we will first introduce the basics of IS and multiple IS (MIS), motivating the need to use several proposal densities. Then, the focus will be on motivating the use of adaptive IS (AIS) algorithms, describing an encompassing framework of recent methods in the current literature. Finally, we review the problem of combining Monte Carlo estimators in the context of MIS and AIS.
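As a minimal illustration of steps (a)-(c), here is a hedged sketch of self-normalized importance sampling in Python; the target (a standard normal known only up to a constant), the Student-t proposal, and the moments being estimated are all assumptions for the example, not choices from the talk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# (Assumed) unnormalized target: standard normal up to a constant.
def log_target_unnorm(x):
    return -0.5 * x**2

# (a) Draw samples from a simple proposal density (here a Student-t).
proposal = stats.t(df=3, loc=0.0, scale=2.0)
x = proposal.rvs(size=10_000, random_state=rng)

# (b) Weight the samples by the target/proposal mismatch (in log space
#     for numerical stability, then self-normalize).
log_w = log_target_unnorm(x) - proposal.logpdf(x)
w = np.exp(log_w - log_w.max())
w /= w.sum()

# (c) Approximate moments of interest with the weighted samples.
mean_est = np.sum(w * x)            # E[X] under the target (should be near 0)
second_moment = np.sum(w * x**2)    # E[X^2] under the target (should be near 1)

# Effective sample size: a standard diagnostic of weight degeneracy.
ess = 1.0 / np.sum(w**2)
print(mean_est, second_moment, ess)
```

The effective sample size printed at the end is one common way to see why a poorly matched proposal degrades the estimator, which is what motivates the adaptive (AIS) schemes discussed in the talk.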
Abstract: J&J is one of the biggest names in oncology drug development and related research. In the first half of my talk, I’ll describe the breadth and depth of oncology research at J&J and give a brief sense of what a statistician does in their day-to-day job in the clinical research industry. In the second half, I’ll go over a couple of real-life applications of statistics that meet research requirements in this process of drug development. Both examples fall within the broad area of causal inference, a tool used frequently to answer crucial questions in clinical research.
Abstract: The Ihara zeta function of a graph has many properties analogous to the Riemann zeta function, and is conceptually simpler to understand. We will prove the Ihara-Bass determinant formula for the zeta function, and calculate it in special cases. Some functional relations of zeta functions for regular graphs will also be discussed. A remarkable fact is that for regular graphs, the graph theory Riemann hypothesis holds if and only if the graph is a Ramanujan graph. If time permits, covering spaces of graphs and divisibility properties of their zeta and L-functions will be considered.
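For reference (a standard statement, not necessarily in the normalization used in the talk): for a finite connected graph $G$ with adjacency matrix $A$, degree matrix $D$, vertex set $V$ and edge set $E$, the Ihara-Bass determinant formula reads
\[
\zeta_G(u)^{-1} \;=\; (1-u^2)^{|E|-|V|} \, \det\!\big( I - Au + (D - I)u^2 \big),
\]
so the zeta function is the reciprocal of a polynomial; for a $(q+1)$-regular graph its poles are governed by the eigenvalues of $A$, which is what links the graph theory Riemann hypothesis to the Ramanujan bound $|\lambda| \le 2\sqrt{q}$ for the nontrivial eigenvalues.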
Abstract: Based on the analogy between number and function rings, delta geometry was developed by A. Buium, in which, for a fixed prime $p$, the notion of a $p$-derivation $\delta$ plays the role of 'differentiation' for number rings. Such a $p$-derivation $\delta$ comes from the $p$-typical Witt vectors, and as a result, delta geometry naturally encodes valuable arithmetic information, especially information pertaining to lifts of Frobenius.
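Concretely (a standard definition, stated here for orientation): a $p$-derivation on a ring $R$ is a map $\delta : R \to R$ with $\delta(1) = 0$ and
\[
\delta(x+y) = \delta(x) + \delta(y) + \frac{x^p + y^p - (x+y)^p}{p}, \qquad
\delta(xy) = x^p\,\delta(y) + y^p\,\delta(x) + p\,\delta(x)\delta(y),
\]
so that $\phi(x) = x^p + p\,\delta(x)$ is a ring endomorphism lifting the Frobenius modulo $p$; on $\mathbb{Z}$, for instance, $\delta(n) = (n - n^p)/p$.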
For an abelian scheme, using the theory of delta geometry, one canonically attaches a filtered isocrystal that bears a natural map to the crystalline cohomology. In this talk, we will discuss some comparison results that explain its relation to crystalline cohomology.
Abstract: We study graphical models for cardinal paired comparison data with and without covariates. Novel, graph-based, necessary and sufficient conditions guaranteeing strong consistency, asymptotic normality, and the exponential convergence of the estimated ranks are emphasized. A complete theory for models with covariates is laid out. In particular, conditions under which covariates can be safely omitted from the model are provided. The methodology is employed in the analysis of both finite and infinite sets of ranked items, specifically in the case of large sparse comparison graphs. The proposed methods are explored by simulation and applied to the ranking of teams in the National Basketball Association (NBA).
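To make the setting concrete, here is a hedged sketch (not the authors' estimator, and without covariates) of a common least-squares approach to cardinal paired comparisons: each comparison reports a score difference $y \approx \mu_i - \mu_j$, and the merits $\mu$ are estimated by solving a graph-Laplacian system built from the comparison graph.

```python
import numpy as np

def ls_ranks(n_items, comparisons):
    """Least-squares merit estimation from cardinal paired comparisons.

    comparisons: list of (i, j, y) meaning item i was compared with item j
    and the observed cardinal score difference was y (model: y = mu_i - mu_j + noise).
    Returns estimated merits (identified up to a constant, pinned to mean zero)
    and the induced ranking.
    """
    L = np.zeros((n_items, n_items))   # graph Laplacian of the comparison graph
    b = np.zeros(n_items)
    for i, j, y in comparisons:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        b[i] += y
        b[j] -= y
    # The Laplacian is singular (a constant shift is unidentifiable); the
    # pseudo-inverse returns the mean-zero solution. Consistency requires the
    # comparison graph to be connected.
    mu = np.linalg.pinv(L) @ b
    ranking = np.argsort(-mu)          # best item first
    return mu, ranking

# Toy usage with 4 items and a sparse comparison graph (made-up data).
comps = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, 0.5), (0, 3, 4.5)]
mu, ranking = ls_ranks(4, comps)
print(mu, ranking)
```

Connectivity of the comparison graph is exactly the kind of graph-based condition that the asymptotic theory in the talk refines for large sparse comparison graphs.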
Abstract: The Hopf map is a continuous map from the $3$-sphere to the $2$-sphere in which the preimage of each point of the $2$-sphere is a distinct great circle of the $3$-sphere. This map generates the third homotopy group of the $2$-sphere. In this talk, I will present a minimal pseudo-triangulation of the Hopf map and establish its uniqueness. Additionally, I will show that the pseudo-triangulation corresponding to the $3$-sphere admits a $4$-coloring.
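For concreteness (a standard description, not necessarily the convention used in the talk): writing $S^3 = \{(z_1,z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}$ and $S^2 \subset \mathbb{C} \times \mathbb{R}$, the Hopf map is
\[
h(z_1, z_2) \;=\; \big( 2 z_1 \bar{z}_2, \; |z_1|^2 - |z_2|^2 \big),
\]
the fiber over each point being the great circle $\{ (e^{i\theta} z_1, e^{i\theta} z_2) : \theta \in [0, 2\pi) \}$; its homotopy class generates $\pi_3(S^2) \cong \mathbb{Z}$.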
Abstract: In this talk, we characterize normal $3$-pseudomanifolds \( K \) with \( g_2(K) \leq 4 \). It is known that if a normal $3$-pseudomanifold \( K \) with \( g_2(K) \leq 4 \) has no singular vertices, then it is a triangulated $3$-sphere. We first prove that a normal $3$-pseudomanifold \( K \) with \( g_2(K) \leq 4 \) has at most two singular vertices. Subsequently, we show that if \( K \) is not a triangulated $3$-sphere, it can be obtained from certain boundary complexes of $4$-simplices by a sequence of operations, including connected sums, edge expansions, and edge folding. Furthermore, we establish that such a $3$-pseudomanifold \( K \) is a triangulation of the suspension of \( \mathbb{RP}^2 \). Additionally, by building upon the results of Walkup, we provide a reframed characterization of normal $3$-pseudomanifolds with no singular vertices for \( g_2(K) \leq 9 \).
Abstract: In this seminar, I shall discuss several estimators of the finite population mean when the data are infinite dimensional in nature. The performance of these estimators will be compared, on the basis of their asymptotic distributions, under different sampling designs and superpopulations satisfying linear models. One of the major findings is that although the use of auxiliary information in the estimation stage usually improves the performance of different estimators, the use of auxiliary information in the sampling design stage often has adverse effects on their performance. This seminar is based on a joint research work with my Ph.D. supervisor Prof. Probal Chaudhuri.
Abstract: The development of structure-preserving time integrators has been a major focus of numerical analysis for the last few decades. In the first part of my presentation, I will discuss relaxation Runge-Kutta (RK) methods, designed to preserve essential conserved quantities during time integration. I will first demonstrate how a slight modification of RK methods can be employed to conserve a single nonlinear invariant.
Subsequently, I will introduce the generalization of the relaxation approach for RK methods to conserve multiple nonlinear invariants in a dynamical system. The significance of preserving invariants and its impact on long-term error growth will be illustrated through numerical examples.
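As a minimal sketch of the relaxation idea for a single invariant (my own toy example, with the test problem, tableau, and root-finding strategy all assumed rather than taken from the talk): after each classical RK4 step one rescales the update to $u_{n+1} = u_n + \gamma_n \Delta u_n$, choosing the relaxation parameter $\gamma_n \approx 1$ so that a chosen invariant is conserved exactly.

```python
import numpy as np
from scipy.optimize import brentq

# Toy conservative system: harmonic oscillator u' = f(u) with invariant
# eta(u) = (u_0^2 + u_1^2) / 2 (the energy), which plain RK4 only preserves
# approximately.
def f(u):
    return np.array([u[1], -u[0]])

def eta(u):
    return 0.5 * (u[0] ** 2 + u[1] ** 2)

def rk4_increment(u, dt):
    """Standard RK4 update direction: u_{n+1} = u_n + increment (gamma = 1)."""
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def relaxation_rk4_step(u, dt):
    """One relaxation RK4 step: scale the increment by gamma so eta is conserved."""
    d = rk4_increment(u, dt)
    # The scalar equation eta(u + gamma*d) - eta(u) = 0 has the trivial root
    # gamma = 0 and a nontrivial root near 1; bracket the latter.
    g = lambda gamma: eta(u + gamma * d) - eta(u)
    gamma = brentq(g, 0.1, 2.0)
    return u + gamma * d

u_std, u_rel = np.array([1.0, 0.0]), np.array([1.0, 0.0])
dt, nsteps = 0.1, 10_000
for _ in range(nsteps):
    u_std = u_std + rk4_increment(u_std, dt)   # baseline RK4
    u_rel = relaxation_rk4_step(u_rel, dt)     # relaxation RK4
print("energy drift, RK4:       ", eta(u_std) - 0.5)
print("energy drift, relaxation:", eta(u_rel) - 0.5)
```

For a quadratic invariant the scalar equation for $\gamma_n$ can be solved in closed form; the root-finding call above is used only to keep the sketch generic.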
In the second part, I will address another crucial challenge in high-order time integration: the phenomenon of order reduction when RK methods are applied to stiff problems, along with its remedy.
I will first illustrate this issue in RK methods and then introduce the remedy through high Weak Stage Order (WSO), capable of alleviating order reduction in linear problems with time-independent operators.
Additionally, I will briefly discuss stiff order conditions, which are more general and can eliminate order reduction for a broader class of problems, specifically semilinear problems. This extension is essential to overcome the limitations of WSO, which primarily focuses on linear problems.
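To illustrate what order reduction looks like in practice, here is a hedged sketch using a standard stiff test case (the Prothero-Robinson problem) and a classical two-stage SDIRK method of order three; the problem, method, and parameter choices are my own for illustration and are not taken from the talk. For very stiff $\lambda$ the observed convergence order typically drops below the classical order.

```python
import numpy as np

# Prothero-Robinson problem: u' = lam*(u - phi(t)) + phi'(t), exact solution u = phi(t).
lam = -1e6
phi = np.sin
dphi = np.cos

def f(t, u):
    return lam * (u - phi(t)) + dphi(t)

# Two-stage SDIRK of classical order 3 (Crouzeix), stage order 1.
g = (3.0 + np.sqrt(3.0)) / 6.0
A = np.array([[g, 0.0], [1.0 - 2.0 * g, g]])
b = np.array([0.5, 0.5])
c = np.array([g, 1.0 - g])

def sdirk_solve(T, n):
    """Integrate to time T with n steps; stage equations are linear, so solve directly."""
    h, t, u = T / n, 0.0, phi(0.0)
    for _ in range(n):
        k = np.zeros(2)
        for i in range(2):
            ti = t + c[i] * h
            rhs = u + h * sum(A[i, j] * k[j] for j in range(i))
            # Stage value g_i solves g_i = rhs + h*A[i,i]*f(ti, g_i)  (linear in g_i).
            gi = (rhs + h * A[i, i] * (dphi(ti) - lam * phi(ti))) / (1.0 - h * A[i, i] * lam)
            k[i] = f(ti, gi)
        u = u + h * b @ k
        t += h
    return u

T = 1.0
errs = []
for n in (20, 40, 80, 160):
    err = abs(sdirk_solve(T, n) - phi(T))
    errs.append(err)
    print(f"n = {n:4d}  error = {err:.3e}")
# Observed orders; expected to fall below the classical order 3 in the stiff regime.
print("observed orders:", [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)])
```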
Abstract: Modern biological studies often involve large-scale hypothesis testing problems in which the hypotheses are organized in a Directed Acyclic Graph (DAG). Extensive research has established that prior structural information can play a vital role in improving the power of classical multiple testing procedures and in obtaining valid and meaningful inference. In a DAG, each node represents a hypothesis, and the edges denote a logical sequence of relationships among these hypotheses that must be taken into account by a multiple testing procedure. A hypothesis rejected by the testing procedure should also result in the rejection of all its ancestors; we term this a "legitimate rejection." We propose an intuitive approach that applies a Benjamini-Hochberg type procedure on the DAG and filters the set of rejected hypotheses to eliminate all illegitimate rejections. Additionally, we introduce a weighted version of this procedure, where each p-value is assigned a weight proportional to the number of non-null hypotheses within the group(s) defined by its parent node(s). This makes it easier to reject p-values in groups predominantly containing non-null hypotheses and harder to reject p-values in groups with mostly null hypotheses. Our unweighted and weighted methods respectively simplify to the Benjamini-Hochberg procedure and the Storey-type adaptive Benjamini-Hochberg procedure when the DAG is edge-free. Our methods are proven to control the False Discovery Rate (FDR) when applied to independent p-values; the unweighted method also controls the FDR for PRDS p-values. Simulation studies confirm that the weighted, data-adaptive version of our method also maintains similar FDR control, albeit under certain conditions. Our simulation studies further elucidate the scenarios in which our proposed methods are more powerful than their competitors. This is a joint work with Dr. Marina Bogomolov, Technion - Israel Institute of Technology.
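A hedged sketch of the unweighted idea as described in the abstract (the exact procedure, thresholds, and any adjustments in the paper may differ): run a Benjamini-Hochberg step on all p-values, then keep only the legitimate rejections, i.e., hypotheses whose ancestors in the DAG are all rejected as well.

```python
import numpy as np

def bh_rejections(pvals, alpha=0.05):
    """Indices rejected by the Benjamini-Hochberg procedure."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = np.nonzero(p[order] <= thresh)[0]
    if below.size == 0:
        return set()
    k = below.max()                      # largest i with p_(i) <= alpha*i/m
    return set(order[: k + 1].tolist())

def ancestors(node, parents, cache=None):
    """All ancestors of a node in a DAG given a parent map {node: [parents...]}."""
    if cache is None:
        cache = {}
    if node in cache:
        return cache[node]
    anc = set()
    for par in parents.get(node, []):
        anc.add(par)
        anc |= ancestors(par, parents, cache)
    cache[node] = anc
    return anc

def dag_filtered_bh(pvals, parents, alpha=0.05):
    """BH followed by filtering out illegitimate rejections (sketch, not the paper's procedure)."""
    rej = bh_rejections(pvals, alpha)
    cache = {}
    # Keep a hypothesis only if every ancestor is also BH-rejected; the kept
    # set is then automatically closed under taking ancestors.
    return {h for h in rej if ancestors(h, parents, cache) <= rej}

# Toy usage: edges 0 -> 1 -> 3 and 0 -> 2 (edges point from parent to child).
parents = {1: [0], 2: [0], 3: [1]}
pvals = [0.001, 0.20, 0.004, 0.002]
print(dag_filtered_bh(pvals, parents, alpha=0.05))
```

In the toy run, hypothesis 3 is rejected by plain BH but filtered out because its parent (hypothesis 1) is not rejected, which is precisely an illegitimate rejection in the abstract's terminology.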
Abstract: Hypothesis testing problems are fundamental to the theory and practice of statistics. It is well known that when the union of the null and the alternative does not encompass the full parameter space, the possibility of a Type III error arises, i.e., the null hypothesis may be rejected when neither the null nor the alternative is true. In such situations, common in the context of order restricted inference, the validity of our inferences may be severely compromised. The study of the geometry of the distance test, a test widely used in constrained inference, illuminates the circumstances in which Type III errors arise and motivates the introduction of \emph{safe tests}. Heuristically, a safe test is a test which, at least asymptotically, is free of Type III errors.
A novel safe test is proposed and studied. The new testing procedure is associated with a \emph{certificate of validity}, a pre-test indicating whether the original hypotheses are consistent with the data.
Consequently, Type III errors can be addressed in a principled way and constrained tests can be carried out without fear of systematically incorrect inferences. Although we focus on testing problems arising in order restricted inference, the underlying ideas are more broadly applicable. The benefits associated with the proposed methodology are demonstrated by simulations and the analysis of several illustrative examples.