MOAC-VHP: A Reinforcement Learning Framework for Real-Time Interactive Visual Design

Dongli Si

Abstract


Optimizing interactive solutions in adaptive interface design is crucial as user interactions and design needs on digital platforms become increasingly complex. Existing systems rely on static heuristics or rule-based techniques that cannot adapt to real-time user behavior or multi-objective demands. This research introduces VISO-RL, a reinforcement learning framework that uses a Multi-Objective Actor-Critic with Visual-Attentive Hierarchical Policy (MOAC-VHP) algorithm to improve adaptive interface design. Training draws on a multidimensional dataset of 8,000 annotated visual samples from the Visual Communication Art Design dataset, including click-through rates, gaze heatmaps, and scroll depth. These inputs are fed into the system in real time to adjust design tactics within an adaptive feedback loop. We compare our technique to DRL-CAD, CLIP-RL-UI, and GCN-DRL in a simulated interactive environment that replicates web-platform user behavior. The experimental evaluation uses three main performance criteria: Real-Time Multi-objective Alignment (RMA), Visual-Personalization Effectiveness (VPE), and Adaptive Interaction Responsiveness (AIR). VISO-RL exceeded the baselines in responsiveness, personalization, and multi-objective balance, with an AIR score of 3.24, a VPE of 4.11, and an RMA of 3.97. Compared to rule-based systems, MOAC-VHP improved design responsiveness by 28% and user engagement by 32%. These results demonstrate the model's ability to create adaptive, personalized, and robust visual interfaces in real-time interactive environments, and a series of experiments validates the combination of hierarchical policies and attention in the architecture. Overall, VISO-RL optimizes adaptive interface design and interaction techniques using advanced reinforcement learning.
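The sketch below illustrates, in simplified form, how a multi-objective actor-critic with a visual-attention encoder and a hierarchical (option-then-action) policy head can be assembled. It is a minimal sketch under stated assumptions: the `MOACVHPSketch` class name, all layer sizes, the number of objectives, the fixed scalarization weights, and the PyTorch framing are illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of a multi-objective actor-critic with visual attention and a
# hierarchical policy, in the spirit of MOAC-VHP. All dimensions and weights
# are illustrative assumptions; the paper's exact architecture is not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MOACVHPSketch(nn.Module):
    def __init__(self, feat_dim=64, n_options=8, n_actions=4, n_objectives=3):
        super().__init__()
        # Visual-attentive encoder: attention over per-region interface features
        # (e.g. features derived from gaze heatmaps, click-through rates, scroll depth).
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, feat_dim))
        # Hierarchical policy: a high-level head picks a design option (e.g. a layout
        # template); a low-level head picks an adjustment action conditioned on it.
        self.high_policy = nn.Linear(feat_dim, n_options)
        self.low_policy = nn.Linear(feat_dim + n_options, n_actions)
        # Multi-objective critic: one value estimate per objective
        # (responsiveness, personalization, multi-objective alignment).
        self.critic = nn.Linear(feat_dim, n_objectives)

    def forward(self, region_feats):
        # region_feats: (batch, n_regions, feat_dim) per-region visual features.
        q = self.query.expand(region_feats.size(0), -1, -1)
        context, _ = self.attn(q, region_feats, region_feats)   # (batch, 1, feat_dim)
        context = context.squeeze(1)
        high_dist = torch.distributions.Categorical(logits=self.high_policy(context))
        option = high_dist.sample()
        option_onehot = F.one_hot(option, self.high_policy.out_features).float()
        low_dist = torch.distributions.Categorical(
            logits=self.low_policy(torch.cat([context, option_onehot], dim=-1)))
        action = low_dist.sample()
        values = self.critic(context)                            # (batch, n_objectives)
        return option, action, high_dist.log_prob(option), low_dist.log_prob(action), values


def actor_critic_loss(log_probs, values, rewards, weights):
    # Scalarize per-objective advantages with fixed weights (an assumption; the
    # weights could also be learned or adapted online), then form the usual
    # policy-gradient term plus a value-regression term.
    advantages = (rewards - values).detach()
    scalar_adv = (advantages * weights).sum(dim=-1)
    policy_loss = -(log_probs * scalar_adv).mean()
    value_loss = F.mse_loss(values, rewards)
    return policy_loss + 0.5 * value_loss


if __name__ == "__main__":
    model = MOACVHPSketch()
    feats = torch.randn(2, 16, 64)                 # two simulated interface states
    option, action, lp_high, lp_low, values = model(feats)
    rewards = torch.rand(2, 3)                     # simulated per-objective rewards
    loss = actor_critic_loss(lp_high + lp_low, values, rewards,
                             torch.tensor([0.4, 0.3, 0.3]))
    loss.backward()
    print("option:", option.tolist(), "action:", action.tolist(), "loss:", float(loss))
```

In this reading, the attention context summarizes which interface regions currently drive user behavior, the hierarchical heads separate coarse design choices from fine adjustments, and the per-objective critic supports the RMA/VPE/AIR-style multi-objective balance described in the abstract.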




DOI: https://doi.org/10.31449/inf.v49i31.9789

This work is licensed under a Creative Commons Attribution 3.0 License.