The Department of Electrical Engineering and Computer Science at the University of Wyoming offers the EECS Colloquium series as a service to all who are interested in Electrical Engineering and Computer Science. Most seminars in Spring 2025 are scheduled for Monday, 3:10PM -- 4:00PM, in EERB 251. For help finding the locations of our seminar meetings, consult the online UWyo campus map. For questions about this page or to schedule talks, please contact Diksha Shukla: dshukla@uwyo.edu. See also the list of seminar schedules and previous EECS Colloquium speakers.
EECS Colloquium Schedule, Spring 2025
EECS Colloquium: Proof Verification with Lurch Plus

Speaker: Kenneth G. Monks, University of Scranton

When: 3:10PM -- 4:00PM, Monday, March 31, 2025

Abstract: Would your students benefit from an easy-to-use, open-source, web-based word processor that could check their assigned mathematical proofs? In this talk, we introduce Lurch, our software project designed specifically for this purpose. We will explain how you can use this software and its accompanying course materials, and customize them for your own purposes. While existing proof verification tools like Lean, Isabelle, Coq, and Mizar are powerful and effective, they often have steep additional learning curves and can be difficult to customize. We will explain how the custom Lurch validation algorithm overcomes these challenges and pose some questions for future work. Additional information is available at lurch.plus.

Bio: Kenneth G. Monks is a Professor of Mathematics at the University of Scranton, where he has taught since 1990. He holds a Ph.D. in Algebraic Topology from Lehigh University and has published in both pure and applied mathematics, with particular interests in discrete dynamical systems, the $3x+1$ problem, the cohomology of the Steenrod algebra, and computer formalization of mathematics. He is the author and lead developer of Lurch, a proof-checking word processor for teaching mathematical reasoning, and has mentored numerous student research projects. Dr. Monks is also the founding director of the Prove it! Math Academy.
EECS Colloquium: Concept-based Semantic Analysis of Deep Neural Networks

Speaker: Ravi Mangal, Colorado State University, Fort Collins, CO

When: 3:10PM -- 4:00PM, Monday, April 14, 2025

Abstract: The analysis of vision-based deep neural networks (DNNs) is highly desirable but challenging due to the difficulty of expressing formal specifications for vision tasks and the lack of efficient verification procedures. In this talk, I will first describe a logical specification language designed to facilitate writing specifications about vision-based DNNs in terms of high-level, human-understandable concepts. I will then describe how we can use emerging multimodal, vision-language, foundation models (VLMs) as a lens to analyze vision models. In particular, I will demonstrate how we can leverage VLMs such as CLIP to encode our concept-based specifications and to design an efficient procedure for verifying vision models with respect to these specifications.

Bio: Ravi Mangal is an assistant professor at Colorado State University. He is interested in all aspects of designing and applying formal methods for assuring the correctness and safety of software systems. His current research focuses on developing methods for formally analyzing the safety and trustworthiness of learning-enabled systems. Previously, he was a postdoctoral researcher at Carnegie Mellon University in the Security and Privacy Institute (CyLab) and received his PhD in Computer Science from the Georgia Institute of Technology.
EECS Colloquium: Understanding Augmented Reality Notifications: From Displays to Interaction

Speaker: Francisco R. Ortega, Colorado State University, Fort Collins, CO

When: 3:10PM -- 4:00PM, Friday, April 25, 2025

Abstract: Notifications are essential for keeping users informed but can also be disruptive. With XR headsets becoming more compact and integrated into daily life, managing notifications effectively is crucial. These headsets occupy the user's field of view, requiring safe and efficient UI design, especially for something as distracting as notifications. Our research on AR notification positioning found that placing notifications near relevant objects, near the user's task, or in the bottom center of the field of view improves usability, with spatial sound enhancing recognition. Users preferred touch interaction for speed and ease, while multimodal input could serve as a backup when touch is impractical, prioritizing stability and reliability. To understand real-world behavior, we conducted a five-day study in which participants used smartglasses displaying their phone notifications, logging experiences through diaries and interviews. We found that the benefit of a notification was strongly influenced not just by the notification itself, but also by the user's current context.

Bio: Francisco R. Ortega is an Associate Professor at Colorado State University (CSU) and has been Director of the Natural User Interaction lab (NUILAB) since Fall 2018. Dr. Ortega earned his Ph.D. in Computer Science (CS) in the field of Human-Computer Interaction (HCI) and 3D User Interfaces (3DUI) from Florida International University (FIU). He also held Postdoctoral and Visiting Assistant Professor positions at FIU between February 2015 and July 2018.

His research has focused on (1) multimodal and unimodal interaction (gesture-centric), which includes gesture elicitation (e.g., a form of participatory design), (2) information access effort in augmented reality (e.g., visual cues and automation bias), (3) AR notifications, and (4) stress reduction using virtual reality forest bathing. For multimodal interaction research, Dr. Ortega focuses on improving user interaction by (a) multimodal elicitation, (b) developing interactive techniques, and (c) improving augmented reality visualization techniques. The primary domains for interaction include general environments, immersive analytics, and VR sketching. His research has resulted in over 150 peer-reviewed publications, including books, journals, conferences, workshops, and magazine articles, in venues such as ACM CHI, ACM VRST, IEEE VR, IEEE TVCG, IEEE ISMAR, ACM PACMHCI, ACM ISS, ACM SUI, IEEE 3DUI, HFES, and human-factors journals, among others. Dr. Ortega has experience with multiple government-funded projects. For example, he was a co-PI on the DARPA Communicating with Computers project. He is currently PI of a 4-year ONR effort titled "Perceptual/Cognitive Aspects of Augmented Reality: Experimental Research and a Computational Model," and was recently awarded a new ONR grant titled "Assessing Cognitive Load and Managing Extraneous Load to Optimize Training." His work has also been funded by the National Science Foundation and other agencies and companies, including a 2023 NSF CAREER award for microgestures and multimodal interaction. Since his tenure-track appointment at CSU in August 2018, Dr. Ortega has brought in over 5 million dollars in external funding (4.5 million as principal investigator). Finally, Dr. Ortega is committed to diversity and inclusion, and his mission is to increase the number of underrepresented minorities in CS, rooted in his own experiences and in his time at FIU, the largest R1 Hispanic-serving institution.

His lab website is https://nuilab.org.
EECS Colloquium: Motion Planning for Multipurpose Autonomous Systems

Speaker: Dr. Juan D. Hernández, School of Computer Science and Informatics, Cardiff University, UK

When: 3:10PM -- 4:00PM, Monday, May 05, 2025

Abstract: Once limited to highly controlled industrial environments, today's robots are continuously evolving into autonomous entities capable of operating in changing and unstructured settings. This evolution is only possible through multi-disciplinary efforts that endow autonomous systems with the capabilities required to deal with such changing and uncertain conditions. One of these disciplines, commonly known as motion planning, consists of computational techniques for calculating collision-free and feasible motions of autonomous systems. In this talk, we will discuss how motion planning is being used to improve the decision-making capabilities of different types of autonomous systems, such as autonomous underwater vehicles (AUVs), autonomous/automated cars, and service robots.

Bio: Juan D. Hernández received the BSc degree in electronic engineering from the Pontifical Xavierian University (Cali, Colombia) in 2009, the MSc degree in robotics and automation from the Technical University of Madrid (Spain) in 2012, and the PhD degree in technology (robotics) from the University of Girona (Spain) in 2017. He worked as a Robotics Research Engineer at the Netherlands Organisation for Applied Scientific Research (TNO) in 2017-2018, was a Postdoctoral Research Associate at Rice University (Houston, TX, USA) in 2018-2019, and was a Senior Engineer for simulation of autonomous systems at Apple Inc. (Sunnyvale, CA, USA) in 2019-2020. In December 2020, he joined Cardiff University, where he is now an Associate Professor (UK Senior Lecturer) in the School of Computer Science and Informatics.

His research focuses on hybrid motion planning approaches that combine classical model-based methods with machine learning techniques, as well as on multimodal interfaces that enhance human-robot interaction and collaboration. Dr Hernández is a Senior Member of the IEEE and its Robotics and Automation Society.
EECS Colloquium: An AI/ML-based Adaptive Control Approach for Enhancing the Maintenance-Free Operating Period (MFOP) of Wind Turbine Generators

Speaker: Dr. Sudha Radhika, BITS Pilani, Hyderabad Campus, Hyderabad, India

When: 3:10PM -- 4:00PM, Friday, May 23, 2025

Abstract: The increasing reliance on renewable energy, especially offshore wind turbines, as a sustainable power source necessitates advanced maintenance strategies to ensure uninterrupted operation of their critical components. Rapid advancements in AI/ML-based technologies have transformed predictive maintenance and operational control of critical assets such as wind turbine generators. This talk explores applications of AI/ML-based Remaining Useful Life (RUL) prediction and the modelling of an adaptive control system that aims to enhance the longevity, reliability, and operational efficiency of wind generators. The AI/ML models utilize real-time sensor data (e.g., vibration, temperature, current, and voltage) to predict the RUL of key generator components, enabling proactive maintenance scheduling, fault detection, and anomaly diagnosis. Advanced machine learning techniques, such as Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), and hybrid models, facilitate accurate RUL estimation, reducing unplanned downtime and maintenance costs. The talk also covers the future expansion of this idea toward the integration of Digital Twin modelling, which offers a real-time virtual representation of physical generators, enabling continuous monitoring, control, and optimization of machine performance. Digital Twins provide a platform for stress testing, performance optimization, and predictive control, where operational parameters are automatically adjusted based on AI-driven insights.

By implementing a closed-loop control system, operators can make dynamic adjustments to generator operations, thereby reducing wear and tear and extending the useful life of critical components. This talk highlights the synergy between AI/ML-based RUL prediction and Digital Twin-driven control, emphasizing their combined potential to shift maintenance from reactive to predictive paradigms.

Bio: Dr. Sudha Radhika is an Associate Professor at BITS Pilani, Hyderabad Campus, Telangana, India. She joined BITS in 2017 after earning her Ph.D. from Tokyo Polytechnic University, Japan. With over 20 years of experience in teaching and research, Dr. Radhika's work spans a broad spectrum within the field of pattern recognition, with a focus on AI/ML-based health condition monitoring, wavelet-based remaining useful life (RUL) prediction, image and video processing, and digital twinning. She has led four funded research projects and has authored over 50 publications in reputed international journals, including SCI-indexed venues. Her intellectual contributions include three filed patents. She has successfully supervised three Ph.D. scholars and currently mentors nine more. In addition to her academic work, Dr. Radhika is a co-director of the startup Yantraayush, which develops integrated hardware and software solutions aimed at predicting RUL and extending the operational lifespan of various types of machines.