Abstract: Current defense mechanisms against model poisoning attacks in federated learning (FL) systems have proven effective up to a certain threshold of malicious clients (e.g., 25% to 50%). In this ...