Moh Khalid Hasan1, Md. Osman Ali2, Ke Wang2, Sijia Yang3, Wei Chen4, 1James Madison University, USA, 2RMIT University, Australia, 3USTB, China, 4SIAT, China
Optical camera communication (OCC) is considered a key enabler of optical wireless communication technology. In OCC, light-emitting diodes (LEDs) serve as the transmitter and rolling shutter (RS) cameras as the receiver for high-speed communication. Reliable data retrieval in OCC, however, critically depends on the luminance received from the LED, which is difficult to control due to the inherent nature of image-based data collection. Smartphones, typically equipped with RS cameras, represent one of the most promising platforms for the commercial deployment of OCC. To ensure system reliability, various methods have been proposed to address the diversity in pixel illumination values captured by smartphone cameras. Furthermore, AI-based approaches that make RS cameras compatible with low-speed mobile scenarios and enhance overall system performance also introduce additional system complexity. In this paper, we provide a systematic review of the state-of-the-art methods for data retrieval in smartphone camera-based OCC. In particular, we identify specific challenges arising from important factors such as communication distance variation and the blooming effect. Furthermore, we discuss recent advancements, especially promising AI applications in OCC. Finally, we outline open research directions for smartphone camera-based OCC.
Optical Camera Communication (OCC), Rolling Shutter Camera, Smartphone-based OCC, Blooming Effect.
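The rolling-shutter principle the abstract relies on can be illustrated with a minimal decoding sketch: because each image row is exposed at a slightly different instant, an on-off keyed (OOK) LED appears as horizontal bright/dark bands, and bits can be recovered from per-row luminance. This is an illustrative sketch, not the authors' method; the function name, the OOK assumption, and the known `rows_per_bit` parameter are hypothetical simplifications.

```python
import numpy as np

def decode_rolling_shutter(frame: np.ndarray, rows_per_bit: int) -> list:
    """Decode OOK bits from a rolling-shutter stripe pattern.

    `frame` is a grayscale image; `rows_per_bit` (rows spanned by one
    symbol) is assumed known from the LED modulation rate and the
    camera's row readout time.
    """
    row_means = frame.mean(axis=1)                       # average luminance per row
    threshold = (row_means.max() + row_means.min()) / 2  # adaptive midpoint threshold
    binary_rows = (row_means > threshold).astype(int)
    # Collapse each group of rows_per_bit rows into one bit by majority vote.
    n_bits = len(binary_rows) // rows_per_bit
    bits = []
    for i in range(n_bits):
        chunk = binary_rows[i * rows_per_bit:(i + 1) * rows_per_bit]
        bits.append(int(chunk.mean() >= 0.5))
    return bits

# Synthetic frame: 4 bits [1, 0, 1, 1], 10 rows per bit, 20 columns wide.
pattern = np.repeat([1, 0, 1, 1], 10).astype(float)
frame = np.tile(pattern[:, None] * 200 + 30, (1, 20))  # bright=230, dark=30
print(decode_rolling_shutter(frame, rows_per_bit=10))  # → [1, 0, 1, 1]
```

In practice the survey's challenges (distance variation, blooming) manifest as stripe-width drift and saturated regions, which break the fixed `rows_per_bit` and midpoint-threshold assumptions used here.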
Neil Langmead, University of Bath, United Kingdom
Modern software development faces significant challenges in release cycle optimization, particularly when addressing the fundamental question: what is the minimum effort required to release a software change? This paper introduces the concept of the NULL Release—a theoretical baseline representing the overhead of releasing software with zero functional changes—and proposes architectural approaches using Dependency Structure Matrices (DSMs) integrated into DevOps pipelines to minimize release cycle time. We present the T(x) pipeline model, a multi-tiered continuous integration framework that incorporates architectural analysis at each stage. A key contribution is demonstrating how DSM partitioning enables identification of independent subsystems, allowing parallel build and test execution with theoretical speedups exceeding 3x. Drawing on industrial case studies from Siemens Healthineers and the SmartBuild approach, we demonstrate how DSM-based architectural analysis can be automated within CI/CD pipelines to detect architectural violations, assess change impact, optimize build dependencies, and enable parallel execution.
DevOps, Dependency Structure Matrix, Continuous Integration, Software Architecture, NULL Release, CI/CD Pipeline, Build Parallelization
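The DSM-partitioning idea behind the claimed parallel speedups can be sketched as finding mutually independent subsystems, i.e., connected components of the (undirected view of the) dependency graph. This is a minimal sketch under that assumption, not the paper's pipeline implementation; the module names are hypothetical.

```python
from collections import defaultdict

def independent_subsystems(deps: dict) -> list:
    """Partition modules into independent subsystems.

    `deps` maps each module to the set of modules it depends on (a
    sparse DSM: deps[a] containing b marks row a, column b). Modules
    in different connected components share no dependencies in either
    direction, so their build/test stages can run in parallel.
    """
    # Build an undirected adjacency view of the DSM.
    adj = defaultdict(set)
    for mod, targets in deps.items():
        adj[mod]  # ensure isolated modules appear
        for t in targets:
            adj[mod].add(t)
            adj[t].add(mod)
    seen, components = set(), []
    for start in list(adj):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # iterative DFS over one component
            m = stack.pop()
            if m in comp:
                continue
            comp.add(m)
            stack.extend(adj[m] - comp)
        seen |= comp
        components.append(comp)
    return components

# Hypothetical DSM: {ui, core} and {drivers, hal} are mutually independent.
dsm = {"ui": {"core"}, "core": set(), "drivers": {"hal"}, "hal": set()}
print(independent_subsystems(dsm))  # two components → two parallel pipelines
```

With two equally sized components, the build stage halves in wall-clock time; the >3x figure in the abstract presumes at least three comparable independent subsystems plus test-stage parallelism.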
Mohammad Mansoori, Amira Soliman, and Farzaneh Etminani, Center for Applied Intelligent Systems Research (CAISR), Halmstad University, Sweden
Clinical notes contain unstructured text provided by clinicians during patient encounters. These notes are usually accompanied by a sequence of diagnostic codes following the International Classification of Diseases (ICD). Correctly assigning and ordering ICD codes is essential for medical diagnosis and reimbursement. However, automating this task remains challenging. State-of-the-art methods treat this problem as a classification task, ignoring the order of ICD codes, which is essential for different purposes. In this work, as a first attempt, we approach this task from a retrieval system perspective to consider the order of codes, thus formulating the problem as a joint classification and ranking task. Our results and analysis show that the proposed framework has a superior ability to identify high-priority codes compared to other methods. For instance, our model's accuracy in correctly ranking primary diagnosis codes is ~47%, compared to ~20% for the state-of-the-art classifier. Additionally, in terms of classification metrics, the proposed model achieves micro- and macro-F1 scores of 0.6065 and 0.2904, respectively, surpassing the previous best model with scores of 0.597 and 0.2660.
generative language models, learning to rank, automatic medical coding, ICD coding, electronic health records, pre-trained language models.
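The primary-diagnosis ranking accuracy quoted above (~47% vs ~20%) can be expressed as a simple metric: the fraction of encounters whose top-ranked predicted ICD code matches the gold first-listed (primary) code. This is an illustrative sketch of that metric under the stated assumption; the function name and the sample codes are hypothetical, not from the paper.

```python
def primary_code_accuracy(predicted: list, gold: list) -> float:
    """Fraction of encounters whose top-ranked predicted ICD code
    matches the gold primary (first-listed) diagnosis code."""
    hits = sum(1 for p, g in zip(predicted, gold) if p and g and p[0] == g[0])
    return hits / len(gold)

# Hypothetical predictions vs gold code sequences for three encounters.
pred = [["I10", "E11.9"], ["J45.909"], ["E11.9", "I10"]]
truth = [["I10"], ["J18.9", "J45.909"], ["E11.9"]]
print(primary_code_accuracy(pred, truth))  # 2 of 3 primaries ranked first
```

Unlike micro/macro-F1, which ignore order, this metric is sensitive to ranking: predicting the right code set in the wrong order still scores zero for that encounter.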
Sasirekha Oguri, John R. Talburt, and Mert Can Cakmak, Center for Entity Resolution and Information Quality (ERIQ), University of Arkansas at Little Rock, USA
Entity resolution (ER) typically relies on pairwise similarity comparisons between records, which limits its ability to capture indirect relationships present in demographic occupancy data. An important indirect pattern arises from household movement, where multiple individuals relocate together across addresses, but detecting such patterns is difficult due to mixed-format records, noise, duplication, and the absence of stable identifiers. This paper proposes an AI-enhanced framework for detecting indirect entity links associated with household movement in unstandardized name–address data. The approach integrates prompt-based large language model (LLM) named entity recognition for extracting personal names and addresses without extensive preprocessing, semantic text embeddings for robust similarity computation, and graph-based reasoning to infer group-level movement patterns. Experimental evaluation on SPX benchmark datasets (S8–S12) generated using the Synthetic Occupancy Generator demonstrates that incorporating indirect household movement evidence improves recall by 8–15% while maintaining high precision, yielding F1-score gains of 6–8% over a strong pairwise baseline.
Entity Resolution, Household Movement Detection, Indirect Linkage, Named Entity Recognition, Large Language Models, Semantic Text Embeddings, Graph-Based Clustering, Occupancy Data, Synthetic Data, Data Integration
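The graph-based reasoning step for household movement can be sketched as linking individuals who share an address transition (old address → new address) and taking connected components as candidate co-moving households. This is a minimal illustrative sketch of that one step only; the paper additionally applies LLM-based NER and embedding similarity upstream, and all names and addresses below are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def household_movement_groups(residence_history: dict) -> list:
    """Group individuals who appear to have moved together.

    `residence_history` maps a person ID to their chronologically
    ordered addresses. Two people are linked when they share a
    consecutive address transition; connected components of these
    links are candidate co-moving households.
    """
    # Index people by each address transition they made.
    by_move = defaultdict(set)
    for person, addrs in residence_history.items():
        for old, new in zip(addrs, addrs[1:]):
            by_move[(old, new)].add(person)
    # Link people sharing a transition via a simple adjacency graph.
    adj = defaultdict(set)
    for people in by_move.values():
        for a, b in combinations(sorted(people), 2):
            adj[a].add(b)
            adj[b].add(a)
    seen, groups = set(), []
    for person in residence_history:
        if person in seen:
            continue
        stack, comp = [person], set()
        while stack:  # iterative DFS over one component
            p = stack.pop()
            if p in comp:
                continue
            comp.add(p)
            stack.extend(adj[p] - comp)
        seen |= comp
        groups.append(comp)
    return groups

# Hypothetical records: Ann and Bob moved together; Cy moved alone.
history = {
    "ann": ["12 Oak St", "9 Elm Ave"],
    "bob": ["12 Oak St", "9 Elm Ave"],
    "cy":  ["4 Pine Rd", "7 Birch Ln"],
}
print(household_movement_groups(history))  # two groups: {ann, bob} and {cy}
```

Exact-string address keys stand in here for the embedding-based similarity the paper uses; with noisy, unstandardized addresses, transitions would instead be matched by embedding distance before this grouping step.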
Siying Wang1, Xuan Wang2, Yining Tang3, Chao Wu3, 1School of Information Resources Management, Renmin University of China, Beijing, China, 2China Media Group, Beijing, China, 3School of Public Affairs, Zhejiang University, Hangzhou, China
Simulating public opinion evolution is a core focus of computational social science. Traditional agent-based models rely on predefined heuristic rules and fail to capture the semantic features and cognitive processes of human natural-language interactions. While large language models offer new approaches for artificial society construction, existing frameworks have limitations in scalability and memory management. Taking the Fukushima nuclear wastewater discharge event as the background, this study uses an open-source multi-agent social simulation framework, designing four progressive intervention scenarios to analyze agents' cognitive synergy and public opinion trajectories. Results show that the framework mitigates role drift and premature consensus and reproduces the public opinion evolution trajectory, providing empirical insights for policy testing and LLM-driven social computing.
Multi-agent simulation, Public opinion evolution, Nuclear wastewater discharge, Computational social science