Wisconsin has plenty of room to improve on cybersecurity, from gaps in statutory definitions to AI use and collaboration among levels of government, experts say.
A panel of experts during an Assembly Science, Technology, and AI Committee informational hearing yesterday provided several hours of insight into the current state of cybersecurity in Wisconsin. While the state does a good job of responding to and investigating cyberattacks, there is room for improvement in proactively preventing them, experts said.
Mike Wyatt, Deloitte’s cybersecurity leader for state, local and higher education, noted the FBI found cybercrime cost Wisconsin about $160 million of the roughly $16.6 billion in losses across the country.
But the state could improve its outlook by moving toward a whole-of-state model in which municipal, county and state governments collaborate and share data.
“From a policy perspective, whole-of-state is absolutely critical and needs to be top of mind,” Wyatt said. “Legislators in New York, Oregon, Iowa, Texas and a number of other states have established cross-government cyber policies with common standards, shared funding and statewide training. A robust, scalable approach that Wisconsin may wish to consider.”
Wisconsin, like every other state, is also vulnerable to cyberattackers using AI to glean as much information as possible from public reports and meetings, said Trevor Johnson, head of Google’s Midwest division for state and local government.
“There were references [earlier in the hearing] to using AI in order to read some of the Legislative Audit Bureau reports,” he said. “Unfortunately, malicious actors will also use AI to read those reports and understand, ‘Where might I be able to find a nugget that will help me get into this particular system?’ They’re using it for research. They can even use AI on top of videos or recordings of sessions such as this one; understand what things might be said that might give me a vector to get at some information.”
And while companies such as Google and Microsoft have created safeguards to block malicious actors from prompting AI to write code that could penetrate a system, those actors are constantly working on ways to circumvent the safeguards, Johnson added.