Probabilistic Safety Guarantees for Learned Control Barrier Functions: Theory and Application to Multi-Objective Human-Robot Collaborative Optimization
Abstract
Designing provably safe controllers for high-dimensional nonlinear systems with formal guarantees represents a fundamental challenge in control theory. While control barrier functions (CBFs) provide safety certificates through forward invariance, manually crafting these barriers for complex systems becomes intractable. Neural network approximation offers expressiveness but traditionally lacks formal guarantees on approximation error and Lipschitz continuity essential for safety-critical applications. This work establishes rigorous theoretical foundations for learned barrier functions through explicit probabilistic bounds relating neural approximation error to safety failure probability. The framework integrates Lipschitz-constrained neural networks trained via PAC learning within multi-objective model predictive control. Three principal results emerge: a probabilistic forward invariance theorem establishing $P(\text{violation}) \le T\delta_{\mathrm{local}} + \exp\left(-h_{\min}^2/(2L^2 T \sigma^2)\right)$, explicitly connecting network parameters to failure probability; sample complexity analysis proving $O(N^{-1/4})$ safe set expansion; and computational complexity bounds of $O(H^3 m^3)$ enabling 50 Hz real-time control. An experimental validation across 648,000 time steps demonstrates a 99.8% success rate with zero violations, a measured approximation error of $\sigma = 0.047$ m, a matching theoretical bound of $\sigma \le 0.05$ m, and a 16.2 ms average solution time. The framework achieves a 52% conservatism reduction compared to manual barriers and a 21% improvement in multi-objective Pareto hypervolume while maintaining formal safety guarantees.
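The forward invariance theorem in the abstract bounds the violation probability by $T\delta_{\mathrm{local}} + \exp(-h_{\min}^2/(2L^2 T \sigma^2))$. A minimal sketch of evaluating that bound follows; all numeric values are illustrative assumptions (not the paper's experimental settings, except $\sigma = 0.047$ m, the reported approximation error), and the function name is hypothetical.

```python
import math

def violation_bound(T, delta_local, h_min, L, sigma):
    """Upper bound on safety-violation probability over a horizon of T steps:
    T * delta_local + exp(-h_min^2 / (2 * L^2 * T * sigma^2)).
    T: horizon length; delta_local: per-step local failure probability;
    h_min: minimum barrier margin; L: Lipschitz constant; sigma: approximation error.
    """
    return T * delta_local + math.exp(-h_min**2 / (2 * L**2 * T * sigma**2))

# Illustrative evaluation: a larger barrier margin h_min or a smaller
# approximation error sigma shrinks the exponential term of the bound.
bound = violation_bound(T=100, delta_local=1e-6, h_min=0.5, L=1.0, sigma=0.047)
```

Note how the bound decomposes: the first term accumulates per-step local failure probability linearly in the horizon, while the second decays exponentially in the squared barrier margin relative to the accumulated approximation noise.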
More information
| Title per WOS: | ID WOS:001688020900001 Not found in local WOS DB |
| Journal Title: | MATHEMATICS |
| Volume: | 14 |
| Issue: | 3 |
| Publisher: | MDPI AG |
| Publication date: | 2026 |
| DOI: | 10.3390/math14030516 |
| Notes: | ISI |