At the intersection of distributed machine learning and malicious interference, federated learning faces growing security challenges. Federated learning enables collaborative model training across devices while preserving data privacy. However, this decentralized nature also opens new vulnerabilities, particularly to adversarial attacks and data poisoning, where malicious actors can inject corrupted data or manipulate model updates to degrade models or extract sensitive information. As the adoption of federated learning accelerates, understanding and addressing these threats is essential to ensure model integrity and resilience in real-world settings. Adversarial AI and Data Poisoning in Federated Learning provides a comprehensive examination of emerging threats, attack vectors, and defense mechanisms within federated learning systems. This book highlights vulnerabilities of federated learning architectures, explores strategies for detecting and mitigating adversarial threats, and presents real-world case studies.
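The threat the blurb describes, a malicious client manipulating its model update to drag the aggregated global model off course, can be sketched as a toy simulation. The update values, the single-attacker setup, and the coordinate-wise median defense below are illustrative assumptions for this sketch, not material from the book:

```python
import numpy as np

# Three honest clients send similar model updates (consensus near [1, 1]);
# one malicious client sends a poisoned update aimed at degrading the model.
honest_updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
poisoned_update = np.array([-10.0, -10.0])  # model-poisoning attack
all_updates = honest_updates + [poisoned_update]

# Plain federated averaging (FedAvg-style mean) is dragged far from the
# honest consensus by a single extreme update.
fedavg = np.mean(all_updates, axis=0)

# A simple robust aggregation rule (coordinate-wise median) largely resists
# one attacker, because the median ignores extreme values per coordinate.
robust = np.median(all_updates, axis=0)

print("Mean aggregation:  ", fedavg)  # pulled toward the attacker
print("Median aggregation:", robust)  # stays near the honest updates
```

The contrast illustrates why naive averaging is a soft target and why robust aggregation rules are a common line of defense in this setting.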
The synopsis may refer to a different edition of this title.
Dr. Shikha Khullar is an accomplished academic and researcher with a Ph.D. in Data Mining and Artificial Intelligence. She currently serves as an Associate Professor at Poornima University, where she is recognized for her dedication to academic excellence and her active involvement in research. With a deep-rooted passion for innovation and discovery, Dr. Khullar engages in interdisciplinary research that bridges technology and real-world applications, particularly in areas such as intelligent systems, smart data analytics, and sustainable digital solutions. Her work is characterized by a forward-thinking approach: she not only explores theoretical models but also contributes to practical implementations that address pressing societal and industrial challenges. At Poornima University, she has been instrumental in mentoring young minds, guiding Ph.D. scholars, and contributing to curriculum development in emerging areas such as Artificial Intelligence, Blockchain, and Smart Agriculture Technologies. She is a member of IAENG and ERDA.
"About this title" may refer to a different edition of this title.
Seller: CitiRetail, Stevenage, United Kingdom
Hardcover. Condition: new. This item is printed on demand. Shipping may be from our UK warehouse or from our Australian or US warehouses, depending on stock availability. Seller inventory number 9798337362243
Quantity: 1 available
Seller: AussieBookSeller, Truganina, VIC, Australia
Hardcover. Condition: new. This item is printed on demand. Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability. Seller inventory number 9798337362243
Quantity: 1 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. New stock. Seller inventory number 9798337362243
Quantity: 2 available