This study introduces a novel Groundwater Flooding Risk Assessment (GFRA) model to evaluate the risks associated with groundwater flooding (GF), a globally significant hazard often overshadowed by surface water flooding. GFRA uses a conditional probability function that combines critical factors, including topography, ground slope, and land use-recharge, to generate a risk assessment map. The study also estimates the return period of GF events (GFRP) by fitting annual maxima of groundwater levels to probability distribution functions (PDFs). Approximately 57% of the pilot area falls within the high and critical GF risk categories, encompassing residential and recreational areas. Urban sectors in the north and east, containing private buildings, public centers, and industrial structures, exhibit high risk, whereas developing areas and agricultural lands show low to moderate risk; this serves as an early warning for urban development policies. The Generalized Extreme Value (GEV) distribution effectively captures groundwater level fluctuations. According to the GFRP model, about 21% of the area, predominantly in the city's northeast, has an annual probability of GF exceedance above 50% (a 1- to 2-year return period), while the urban outskirts show longer return periods (> 10 years). The model's predictions align with recorded flood events (90% correspondence). This approach offers valuable insight into GF threats at vulnerable locations and supports proactive planning and management to enhance urban resilience and sustainability.
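The abstract's return-period step (fitting annual maxima of groundwater levels to a GEV distribution and converting exceedance probabilities into return periods) can be illustrated with a minimal sketch. This is not the authors' code: the well data, the flooding threshold, and the 10-year return level query are all hypothetical placeholders.

```python
# Minimal sketch (illustrative, not the authors' implementation): fit a GEV
# distribution to annual maximum groundwater levels and derive exceedance
# probabilities and return periods, as described in the GFRP step.
import numpy as np
from scipy import stats

# Hypothetical annual maximum groundwater levels (m above datum) for one well.
annual_maxima = np.array([2.1, 2.4, 1.9, 2.8, 3.0, 2.2, 2.6, 3.3, 2.5, 2.9])

# Fit the GEV distribution (shape c, location, scale) by maximum likelihood.
c, loc, scale = stats.genextreme.fit(annual_maxima)

# Annual probability that an assumed critical groundwater level is exceeded,
# and the corresponding return period T = 1 / P(exceedance).
threshold = 2.7  # assumed groundwater level at which flooding begins
p_exceed = stats.genextreme.sf(threshold, c, loc=loc, scale=scale)
return_period = 1.0 / p_exceed

# Return level for a chosen return period, e.g. the 10-year groundwater level.
level_10yr = stats.genextreme.isf(1.0 / 10.0, c, loc=loc, scale=scale)

print(f"P(exceed {threshold} m) = {p_exceed:.2f}, return period = {return_period:.1f} yr")
print(f"10-year return level = {level_10yr:.2f} m")
```

In this scheme, cells where the fitted exceedance probability is above 0.5 would correspond to the 1- to 2-year return-period class reported in the abstract.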
The expanding adoption of artificial intelligence systems across high-impact sectors has catalyzed concerns regarding inherent biases and discrimination, leading to calls for greater transparency and accountability. Algorithm auditing has emerged as a pivotal method to assess fairness and mitigate risks in applied machine learning models. This systematic literature review comprehensively analyzes contemporary techniques for auditing the biases of black-box AI systems beyond traditional software testing approaches. An extensive search across technology, law, and social sciences publications identified 22 recent studies exemplifying innovations in quantitative benchmarking, model inspections, adversarial evaluations, and participatory engagements situated in applied contexts like clinical predictions, lending decisions, and employment screenings. A rigorous analytical lens spotlighted considerable limitations in current approaches, including predominant technical orientations divorced from lived realities, lack of transparent value deliberations, overwhelming reliance on one-shot assessments, scarce participation of affected communities, and limited corrective actions instituted in response to audits. At the same time, directions like subsidiarity analyses, human-cent
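As a small illustration of the "quantitative benchmarking" category of audits surveyed above, the sketch below computes a demographic parity gap over a black-box model's decisions. The decisions and group labels are hypothetical, and this is only one of many fairness metrics an audit might report.

```python
# Minimal sketch (illustrative only): a demographic parity check, one simple
# form of quantitative bias benchmarking for a black-box decision system.
import numpy as np

# Hypothetical binary decisions (1 = favorable outcome) and group membership.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

# Positive-decision rate per group and the demographic parity difference.
rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
dp_gap = rate_a - rate_b

print(f"Favorable rate A = {rate_a:.2f}, B = {rate_b:.2f}, parity gap = {dp_gap:.2f}")
```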