The Basel III regulation raised the minimum capital requirements for banks. However, its implementation may not have reduced systemic risks. Researchers investigating optimal banking regulation have not reached a consensus on whether capital requirements should be increased or decreased. Here, we use an agent-based approach to study capital regulation and its implications for the evolution of the banking system. We calibrate key model parameters using data on the Russian banking system. We find that lower capital requirements imply higher financial stability than the Basel III regime, in which the regulator requires banks to hold capital exceeding 10% of risk-weighted assets. However, the regulatory rule merely requiring non-negative capital is the simplest solution and the one that best fits heterogeneous economies. Across all systems, it produces the highest ratio of capital to assets, the fewest bank failures, and the lowest demand among banks for interbank borrowing to cover liquidity problems.
The problem of community detection in a network with node features takes into account both the graph structure and the features. The goal is to find relatively dense groups of interconnected entities that share some features in common. Algorithms based on probabilistic community models require the node features to be categorical. We use a data-driven model combining least-squares data-recovery criteria for both the graph structure and the node features, which allows us to handle both quantitative and categorical features. After deriving an equivalent complementary criterion to optimize, we apply a greedy algorithm that detects communities one after another. We experimentally show that our proposed method is effective on both real-world and synthetic data. In cases where attributes are categorical, we compare our approach with state-of-the-art algorithms and find it competitive against them.
NP-hard scheduling problems with the criterion of minimizing the maximum penalty, e.g., maximum lateness, are considered. For such problems, a metric that delivers an upper bound on the absolute error of the objective function value is introduced. Taking a given instance of such a problem and using the introduced metric, we determine the nearest instance for which a polynomial or pseudo-polynomial algorithm is known. A schedule is constructed for this nearest instance and then applied to the original instance. It is shown how this approach can be applied to various scheduling problems.
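As an illustration of the idea (not the paper's metric itself), consider the single-machine problem 1|r_j|L_max, which is NP-hard, while its relaxation without release dates, 1||L_max, is solved optimally by the earliest-due-date (EDD) order. The sketch below, with made-up data, schedules the relaxed instance by EDD and applies that order to the original instance:

```python
def max_lateness(order, p, r, d):
    # evaluate the maximum lateness of a given job order on one machine
    t, L = 0, float('-inf')
    for j in order:
        t = max(t, r[j]) + p[j]   # wait for release, then process
        L = max(L, t - d[j])
    return L

# hypothetical instance of 1|r_j|L_max (NP-hard with release dates)
p = [3, 2, 4]    # processing times
r = [0, 5, 1]    # release dates
d = [6, 9, 10]   # due dates

# a "nearest" polynomially solvable instance: drop the release dates;
# for 1||L_max the earliest-due-date (EDD) order is optimal
edd_order = sorted(range(len(p)), key=lambda j: d[j])

# apply the schedule built for the simplified instance to the original one
L_approx = max_lateness(edd_order, p, r, d)
print(L_approx)
```

The gap between L_approx and the true optimum of the original instance is what a metric of the paper's kind would bound a priori.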
We propose an algorithm for linearizing systems of partial differential equations at constant solutions. The algorithm is based on an isomorphism constructed between the ring of linearized functions and the ring of special matrices, which makes it possible to simplify calculations in the process of linearization. The algorithm is illustrated by applying it to the quasigasdynamic system.
This paper presents a novel combinatorial approach for voting rule analysis. Applying reversal symmetry, we introduce a new class of preference profiles and a new representation (bracelet representation) of preference profiles. By applying an impartial, anonymous, and neutral culture model for the case of three alternatives, we obtain precise theoretical values for the number of election scores for the plurality rule, the Kemeny rule, the Borda rule, and the scoring rules in the extreme case.
Data envelopment analysis (DEA) methods are commonly used to assess a region's disaster vulnerability. However, most of these methods work with precise values of all the characteristics of the regions, whereas in real life much of the data consists of expert estimates or approximate values. We therefore propose modified DEA methods that take the imprecision of the data into account, and we apply them to the evaluation of wildfire-prevention measures in the regions of the Russian Federation.
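As a minimal sketch of the idea (an illustrative simplification, not the paper's method): in the single-input, single-output case the CCR efficiency reduces to a ratio of ratios, and interval-valued data can be handled by evaluating each region at pessimistic and optimistic endpoints. All names and numbers below are hypothetical:

```python
def ccr_efficiency(x, y, o):
    # single-input, single-output CCR efficiency reduces to a ratio of
    # ratios; the general multi-factor case requires a linear program
    best = max(yj / xj for xj, yj in zip(x, y))
    return (y[o] / x[o]) / best

def interval_efficiency(x_lo, x_hi, y_lo, y_hi, o):
    # with interval data, bound unit o's efficiency by evaluating it at its
    # worst case (high input, low output, rivals at their best) and at its
    # best case (low input, high output, rivals at their worst)
    n = len(x_lo)
    x_pess = [x_lo[j] if j != o else x_hi[o] for j in range(n)]
    y_pess = [y_hi[j] if j != o else y_lo[o] for j in range(n)]
    x_opt = [x_hi[j] if j != o else x_lo[o] for j in range(n)]
    y_opt = [y_lo[j] if j != o else y_hi[o] for j in range(n)]
    return (ccr_efficiency(x_pess, y_pess, o),
            ccr_efficiency(x_opt, y_opt, o))

# toy data: 3 regions, input = prevention budget, output = fires avoided,
# each known only up to an interval (expert estimates)
x_lo, x_hi = [1.8, 3.5, 3.9], [2.2, 4.5, 4.1]
y_lo, y_hi = [0.9, 1.8, 0.9], [1.1, 2.2, 1.1]
print(interval_efficiency(x_lo, x_hi, y_lo, y_hi, 2))
```

The result is an efficiency interval rather than a point score, which is the kind of output an imprecise-data DEA modification produces.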
Essays by and in Honor of William Gehrlein and Dominique Lepelley
Presents recent research on the analysis of voting rules using the probabilistic approach
This book includes up-to-date contributions in the broadly defined area of probabilistic analysis of voting rules and decision mechanisms. Featuring papers from all fields of social choice and game theory, it presents probability arguments to allow readers to gain a better understanding of the properties of decision rules and of the functioning of modern democracies. In particular, it focuses on the legacy of William Gehrlein and Dominique Lepelley, two prominent scholars who have made important contributions to this field over the last fifty years. It covers a range of topics, including (but not limited to) computational and technical aspects of probability approaches, evaluation of the likelihood of voting paradoxes, power indices, empirical evaluations of voting rules, models of voters’ behavior, and strategic voting. The book gathers articles written in honor of Gehrlein and Lepelley along with original works written by the two scholars themselves.
In July 2020, the BIS indicated an unprecedented rise in default-risk correlation resulting from the pandemic-induced accumulation of credit risks. Credit risk measurement for a third of the world's banking assets depends on the Basel internal-ratings-based (IRB) models. To ensure financial stability, IRB models need to be accurate in forecasting the probability of default (PD). A natural question is which model may be deemed accurate when the data exhibit default correlation. For such a case, the existing prudential IRB validation guidelines suggest a confidence interval up to 100 percentage points wide. Such an interval is useless, as any model and any PD forecast then seem accurate. The novelty of this paper is the justification of twin confidence intervals for validating PD model accuracy. The higher the default correlation, the more these intervals concentrate around the two extremes (default and its absence).
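The effect behind the uselessly wide intervals can be reproduced with the standard one-factor Vasicek model (a common textbook device, not necessarily the paper's exact setup): the higher the asset correlation, the more the realized yearly default rate drifts toward the two extremes. All parameters below are illustrative:

```python
import math
import random

def simulate_default_rates(pd_, rho, n_loans, n_years, seed=0):
    # one-factor Vasicek model: borrower i defaults in a year when
    # sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd_), with Z a common factor
    rng = random.Random(seed)
    phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # normal CDF
    lo, hi = -8.0, 8.0
    for _ in range(80):  # Phi^{-1}(pd_) by bisection, stdlib only
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if phi(mid) < pd_ else (lo, mid)
    threshold = lo
    rates = []
    for _ in range(n_years):
        z = rng.gauss(0, 1)
        k = sum(1 for _ in range(n_loans)
                if math.sqrt(rho) * z + math.sqrt(1 - rho) * rng.gauss(0, 1)
                < threshold)
        rates.append(k / n_loans)
    return rates

independent = simulate_default_rates(pd_=0.05, rho=0.0, n_loans=200, n_years=1000)
correlated = simulate_default_rates(pd_=0.05, rho=0.45, n_loans=200, n_years=1000)
# with high default correlation the yearly default rate spreads toward the
# extremes: many near-zero years and occasional mass-default years
print(max(independent), max(correlated))
```

A single confidence interval wide enough to cover the correlated distribution covers almost any forecast, which motivates intervals concentrated near the two extremes instead.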
We study explicit two-level finite-difference schemes on staggered meshes for two known regularizations of the 1D barotropic gas dynamics equations, including schemes with discretizations in x that possess the dissipativity property with respect to the total energy. We derive criteria of L^2-dissipativity in the Cauchy problem for their linearizations at a constant solution with zero background velocity, and compare these criteria with those for schemes on non-staggered meshes. We also consider the case of the 1D Navier-Stokes equations without an artificial viscosity coefficient. To analyze the case of the 1D Navier-Stokes-Cahn-Hilliard equations, we derive and verify criteria of L^2-dissipativity and stability for an explicit finite-difference scheme approximating a non-stationary equation that is fourth-order in x and includes a second-order term in x. The obtained criteria may be useful for computing flows at small Mach numbers.
Key words: L^2-dissipativity, explicit finite-difference schemes, staggered meshes, gas dynamics equations, Navier-Stokes-Cahn-Hilliard equations.
We explore a doubly greedy approach to community detection in feature-rich networks. According to this approach, both the network data and the feature data are straightforwardly recovered from the underlying unknown non-overlapping communities, each supplied with a center in the feature space and intensity weight(s) over the network. Our least-squares additive criterion allows us to search for communities one by one and to build each community by adding entities one by one. A focus of this paper is that the feature-space part of the data is converted into a similarity matrix format. The similarity/link values can be used in either of two modes: (a) measured on the same scale, so that one can meaningfully compare and sum similarity values across the entire similarity matrix (summability mode), or (b) such that similarity values in one column should not be compared with values in other columns (nonsummability mode). The two input matrices and two modes lead us to develop four different Iterative Community Extraction from Similarity data (ICESi) algorithms, which determine the number of communities automatically. Our experiments on real-world and synthetic datasets show that these algorithms are valid and competitive.
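A minimal sketch of one-by-one greedy extraction from a similarity matrix (an illustrative simplification with a fixed similarity threshold, not the exact ICESi criterion; the matrix and threshold are made up):

```python
def extract_community(A, nodes, threshold):
    # seed with the most similar pair, then greedily add the node with the
    # highest total similarity to the community while its average similarity
    # to the community stays above the threshold
    pool = sorted(nodes)
    i, j = max(((a, b) for a in pool for b in pool if a < b),
               key=lambda pair: A[pair[0]][pair[1]])
    community = {i, j}
    candidates = set(pool) - community
    while candidates:
        best = max(candidates, key=lambda k: sum(A[k][m] for m in community))
        if sum(A[best][m] for m in community) / len(community) < threshold:
            break
        community.add(best)
        candidates.discard(best)
    return sorted(community)

def extract_communities(A, threshold):
    # one-by-one extraction; the number of communities is determined
    # automatically rather than fixed in advance
    remaining = set(range(len(A)))
    result = []
    while len(remaining) >= 2:
        S = extract_community(A, remaining, threshold)
        result.append(S)
        remaining -= set(S)
    result.extend([v] for v in sorted(remaining))
    return result

# toy symmetric similarity matrix with two blocks, {0, 1, 2} and {3, 4}
A = [[0.0, 0.9, 0.8, 0.1, 0.0],
     [0.9, 0.0, 0.7, 0.2, 0.1],
     [0.8, 0.7, 0.0, 0.0, 0.1],
     [0.1, 0.2, 0.0, 0.0, 0.9],
     [0.0, 0.1, 0.1, 0.9, 0.0]]
print(extract_communities(A, threshold=0.5))
```

The summability/nonsummability distinction would enter through how the stopping criterion compares values across the matrix; the fixed threshold above corresponds to the summable case.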
The COVID-19 pandemic induced central bankers to search for the most efficient stimulus measures. As a solution, they made an unprecedented step: they lowered the reserve requirement (RR) to zero. This was done in the United States [FRS. 2020. “Federal Reserve Actions to Support the Flow of Credit to Households and Businesses.” Accessed February 10, 2021. Board of Governors of the Federal Reserve System] and Morocco [BKAM. 2020. “Monetary Policy Report No. 55.” Accessed from the Central Bank of Morocco website]. The existing monetary theory literature suggests that the broad money supply should go to infinity as a result, so one might expect a rapid economic recovery. However, this may not come true. The novelty of this paper is a development of the money multiplier theory. We explain why setting the RR at zero may slightly boost a cash-intensive economy (like Morocco) and may not deliver any benefit to a mostly cashless one (like the US, Canada, or the EU).
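The textbook money-multiplier formula behind the theoretical prediction can be sketched as follows (the cash ratios are hypothetical; the paper's contribution is precisely to explain why this textbook prediction need not materialize):

```python
def money_multiplier(cash_ratio, reserve_ratio):
    # textbook multiplier m = (c + 1) / (c + r), where c is the public's
    # cash-to-deposit ratio and r is the reserve requirement
    c, r = cash_ratio, reserve_ratio
    return (c + 1) / (c + r)

# cash-intensive economy (hypothetical c = 0.5): cutting r from 2% to zero
print(money_multiplier(0.5, 0.02), money_multiplier(0.5, 0.0))
# nearly cashless economy (hypothetical c = 0.02): the same cut doubles m,
# and in the textbook limit m -> infinity as both c and r approach zero
print(money_multiplier(0.02, 0.02), money_multiplier(0.02, 0.0))
```

In the textbook formula the zero-RR step barely moves the multiplier when cash holdings dominate, while it explodes when the economy is nearly cashless; the paper argues the latter effect fails to materialize in practice.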
Restricted domains over voter preferences have been extensively studied within the area of computational social choice, initially for preferences that are total orders over the set of alternatives and subsequently for preferences that are dichotomous—i.e., that correspond to approved and disapproved alternatives. This paper contributes to the latter stream of work in a twofold manner. First, we obtain forbidden subprofile characterisations for various important dichotomous domains. Then, we are concerned with incomplete profiles that may arise in many real-world scenarios, where we have partial information about the voters’ preferences. We tackle the problem of determining whether an incomplete profile admits a completion within a certain restricted domain and design constructive, polynomial algorithms to that effect.
This work is devoted to a methodology for identifying structurally close objects of the type “country_year” based on a system of indicators characterizing state capacity in 1996–2015. We compare clustering methods (including hierarchical clustering) with pattern-analysis methods based on pairwise comparison of indicators, namely ordinal-fixed and ordinal-invariant pattern clustering. We demonstrate that clustering and pattern-analysis methods can be combined to obtain results that are interpretable from the point of view of political science. Based on a dynamic analysis of patterns, we identify groups of countries with similar development paths in the reference years and determine the dynamic change in state capacity (with respect to the selected indicator system) for 166 countries of the world.
Tobacco use is a known risk factor for premature mortality. This paper presents estimates of smoking-related mortality in Russia in 2019 for 27 causes of death among smokers and ex-smokers, by gender and five-year age group.
Smoking prevalence in Russia by sex and age, as well as the share of former smokers who quit 10 or fewer years ago, was obtained from the RLMS-HSE survey. The causes of death associated with current and past smoking, as well as the relative risk (RR) values, were obtained from systematic reviews and cohort studies. The contribution of current and past smoking to each cause of death, by sex and five-year age group, was estimated using the population attributable fraction (PAF) formula in its multilevel form. Rosstat data on the number of deaths by cause of death, gender, and five-year age group in Russia in 2019 were obtained from the database of the Federal Research Institute for Health Organization and Informatics of the Ministry of Health of the Russian Federation.
The structure of, and differences in, smoking-related mortality were calculated by cause of death, age, and gender.
According to our calculations, more than 266 thousand people died due to smoking in Russia in 2019, including 226 thousand men and 40 thousand women. 58% of these deaths were from cardiovascular diseases. These calculations make it possible to estimate the structure of smoking-related mortality by causes of death, gender and age in modern Russia, and to set a benchmark for tobacco-control policy development.
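The multilevel PAF formula mentioned above can be sketched as follows (the prevalences, relative risks, and death count below are illustrative placeholders, not the paper's values):

```python
def paf_multilevel(prevalences, relative_risks):
    # multilevel population attributable fraction:
    #   PAF = (sum_i p_i * RR_i - 1) / (sum_i p_i * RR_i)
    # where i runs over exposure levels (never smokers have RR = 1) and
    # the prevalences p_i sum to 1
    s = sum(p * rr for p, rr in zip(prevalences, relative_risks))
    return (s - 1) / s

# hypothetical stratum, e.g. men aged 60-64, one cause of death
p = [0.40, 0.25, 0.35]   # never, former, current smokers (sum to 1)
rr = [1.0, 4.0, 15.0]    # illustrative relative risks
paf = paf_multilevel(p, rr)
deaths_in_stratum = 1000  # hypothetical Rosstat death count for the stratum
print(round(paf, 3), round(paf * deaths_in_stratum))
```

Multiplying each stratum's PAF by the corresponding death count and summing over causes, sexes, and age groups yields total smoking-attributable deaths, which is the aggregation behind the 266 thousand figure.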
Gender imbalance across professions is also a consequence of the disparity between men and women in certain areas of education. Using data on students enrolled in Russian universities in 2020, we calculated the proportions of men and women in all majors for state-funded, fee-paying, and overall enrollment.
The indicators of regional sports development in the Russian Federation are analyzed to find regions with a similar sports development strategy (according to the chosen methodology and measures of closeness) and to identify dynamic groups in a four-year period. Some clustering and pattern analysis methods are described, and their use in the study is validated. The results obtained by classical clustering and ordinal-invariant pattern clustering methods are compared. The main state programs in the field of sports in the Russian Federation are highlighted and analyzed. The key aspects and problems of the state regulation of sports activities in the Russian Federation are indicated. Some ways for improving the existing regulatory and legal acts based on the dynamic analysis of regional patterns are proposed.
After the 2020 pandemic, uncollateralized consumer lending resumed growing at pre-crisis rates. The Bank of Russia is responsible for overall financial stability. To curb the emerging risks, it again activated disincentivizing macroprudential measures (risk-weight add-ons) and expects to obtain the right to implement prohibiting measures. To make further use of the two groups of measures, the regulator has to know their efficiency. The conventional approaches, that of the Bank for International Settlements (BIS) and difference-in-differences, deliver poorly interpretable results, principally because they do not account for the complex, multistep process of implementing the measures or for the banks' reactions to them. We therefore modify the difference-in-differences approach, which allows us to trace the scope of efficient application of the measures. The 10% of banks with a ratio of consumer loans to assets in excess of 20% reduce this share by 0.3 pp. per quarter for each 100 pp. of risk-weight add-on, starting from the measure announcement date. The 70% of banks with a share of consumer loans above 1.5% of assets tend to decrease their overall lending pace by 2-6 pp. per quarter for each 100 pp. of risk-weight add-on, starting from the measure application date.
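For reference, the canonical difference-in-differences estimator that the paper modifies can be sketched as follows (all numbers hypothetical):

```python
from statistics import mean

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    # canonical difference-in-differences: the change in the treated group
    # minus the change in the control group; the paper's modification tracks
    # announcement vs application dates and the multistep rollout, which
    # this plain estimator ignores
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))

# hypothetical quarterly consumer-loan growth rates, pp., before and after
# a risk-weight add-on, for affected banks and unaffected controls
treated_pre, treated_post = [5.0, 5.5, 6.0], [2.0, 2.5, 3.0]
control_pre, control_post = [4.0, 4.5, 5.0], [3.5, 4.0, 4.5]
print(did_estimate(treated_pre, treated_post, control_pre, control_post))
```

A negative estimate of this kind is how a per-quarter slowdown in lending attributable to the measure, such as the 0.3 pp. and 2-6 pp. figures above, would be read off.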