Workforce Rewired Daily Briefing | Wednesday, April 15, 2026
Today’s news draws a sharp line between who holds power in the AI transition and who doesn’t: the federal government is automating its own workforce after cutting 40% of it, LinkedIn is building a marketplace to harvest professional expertise for AI training, and a global survey of 39,000 workers finds that fewer than 1 in 4 believe their job is safe. Workers are watching institutions move fast in directions that don’t include them.
By the Numbers
22% of workers globally strongly agree their job is safe from elimination, per ADP Research’s survey of 39,000 workers across 36 countries. Among individual contributors, that number drops to 18%.
5.3x: workers whose employers invest in their skills are 5.3 times as likely to feel their job is secure, per the same ADP survey. The gap between invested and uninvested workers is the single largest driver of job security confidence in the dataset.
40% of GSA’s total workforce has been eliminated since October 2024. The agency is now targeting 1 million automated work hours through its internal AI tool, USAi, to maintain capacity with far fewer people.
Up to $150 per hour: the top advertised rate for AI trainer positions in LinkedIn’s new AI labor marketplace, confirmed by the company to Business Insider. The lowest-tier roles start around $40 per hour.
100,000+ New York State employees across 50+ agencies are now eligible for AI training under Governor Hochul’s statewide expansion, making New York the largest state in the country to provide AI tools and training to its entire state workforce.
Layoffs and Company Decisions
Fewer Than 1 in 4 Workers Worldwide Believe Their Job Is Safe from AI, ADP Survey Finds
ADP Research’s “Today at Work 2026” report, released March 25 and based on responses from more than 39,000 workers across 36 countries, finds that only 22% of workers globally strongly agree their job is safe from elimination. The anxiety is steepest at the bottom of the organizational chart: just 18% of individual contributors feel secure, compared with 31% of upper managers and 35% of C-suite executives. The data reveals a sharp divide driven not just by job level but by whether workers feel their employer is investing in them: those who believe their company is developing their skills are 5.3 times more likely to feel their job is secure, and are six times more likely to be fully engaged at work.
Why it matters: With 39,000 respondents across 36 markets, this is one of the largest workforce sentiment datasets on AI and job security ever collected. The finding that employer investment in skills is the single strongest predictor of felt security is a direct, actionable data point for HR leaders. Investing in worker development is not just an engagement tool; it is the most reliable lever for reducing the fear that undermines adoption, retention, and performance.
LinkedIn Tests an AI Labor Marketplace, Paying Professionals Up to $150 an Hour to Train AI Models
LinkedIn confirmed to Business Insider on April 13 that it is in early-stage testing of an “AI labor marketplace,” a new platform where professionals can earn between $40 and $150 per hour to train AI systems. The roles involve rating AI responses, flagging errors, and conducting “red teaming” to expose model weaknesses. The highest-paid positions target senior software engineers, while roles for finance and Excel experts pay up to $100 per hour, and nursing professionals are also listed. LinkedIn described AI training as “one of the fastest-growing jobs in the U.S. right now.” The initiative would put LinkedIn in direct competition with AI training startups like Mercor and Scale AI.
Why it matters: LinkedIn’s move signals that the market for human expertise in AI training is now large enough for a major professional platform to formalize it. For workers whose traditional roles are being displaced by AI, this represents a genuinely new category of work, and LinkedIn’s entry gives it mainstream visibility it has not had before. The platform also has access to the professional credentials and work histories needed to vet trainers at scale, which could make it the dominant clearing house for this kind of labor.
Business Insider via DNYUZ, April 13, 2026
Policy and Government
After Cutting 40% of Its Staff, GSA Sets a Target of Automating 1 Million Work Hours with AI
GSA Deputy Director Michael Lynch announced at an industry conference this week that the General Services Administration has launched a “million hours challenge” for its internal AI tool USAi, aimed at automating a substantial portion of the work previously performed by federal employees and contractors. The agency framed the initiative around an “EOA” model: eliminate, optimize, automate. The backdrop is stark: since October 2024, GSA has shed nearly 40% of its total workforce, and the Public Buildings Service has lost about 45% of its staff. A million work hours amounts to roughly one year of standard full-time work from 500 employees. Lynch indicated the program could expand to other agencies if it proves successful.
Why it matters: GSA is the federal government’s real estate and procurement backbone, and it is now explicitly using AI to absorb the workload left by mass human departures rather than rebuilding headcount. This is the sharpest example yet of a large institution treating AI not as an augmentation tool but as a workforce replacement strategy. If other agencies follow the same playbook, the federal government may emerge from this period structurally smaller and permanently more dependent on AI for core operations.
Federal News Network, April 2026
New York Becomes the Largest State to Deploy AI Training Tools to Its Entire Workforce
Governor Kathy Hochul announced on April 6 the statewide expansion of AI training and tools to all New York State employees, covering more than 100,000 workers across 50 agencies. The centerpiece is AI Pro, a secure generative AI assistant developed by the New York Office of Information Technology Services and powered by Google Gemini, which gives state employees a vetted environment to develop AI skills on the job. A two-part training program covers both responsible AI use as a public servant and hands-on application of the tool. The expansion follows a successful fall 2025 pilot with 1,200 participants across eight agencies and fulfills a pledge from the Governor’s 2025 State of the State address. Agencies that elect to use AI Pro are required to complete responsible AI training.
Why it matters: Most government AI initiatives focus on AI procurement or regulation. This one focuses on the people side: ensuring that workers who serve the public have both the skills and the ethical framework to use AI in their jobs. At 100,000 employees, New York’s program is the largest state-level workplace AI training deployment in the country, and it arrives alongside the Governor’s separately launched FutureWorks Commission on AI’s broader impact on New York’s workforce.
Office of Governor Kathy Hochul, April 6, 2026
Reskilling and Education
University of Minnesota Launches “AI for All” Courses and a New AI Minor for Every Major
The University of Minnesota’s College of Science and Engineering announced on March 24 that it will launch two flagship “AI for All” courses and a new AI minor beginning in fall 2026, designed for students in any major across the university. The curriculum was built explicitly to turn every student, regardless of discipline, into an AI-literate professional. Alongside the courses, the college is opening a new AI Makerspace, a hybrid physical and digital learning environment for hands-on AI exploration. The initiative targets students in fields from business and the humanities to health sciences, positioning AI literacy not as a technical specialization but as a foundational professional competency.
Why it matters: The Minnesota approach differs from most university AI initiatives in two ways: it is designed for every major rather than concentrated in CS or data programs, and it pairs coursework with a physical lab that gives students hands-on time with AI systems rather than just conceptual exposure. As a flagship state university with a large student population across diverse fields, the scale and cross-disciplinary scope of this program will be worth watching as a model for others.
University of Minnesota College of Science and Engineering, March 24, 2026
What Workforce Leaders Are Watching
GSA’s playbook (cut staff first, automate the workload second) is now visible. If this becomes the federal template, it raises a direct question for every institution watching: are you planning AI deployment around what people can do with it, or around how many people you can replace with it?
The ADP finding that employer investment in skills is the strongest predictor of felt job security is a direct counterargument to the idea that displacement anxiety is just fear of the unknown. Workers are not simply scared of AI: they are reading the actions of their employers and drawing reasonable conclusions. What signal is your organization currently sending?
LinkedIn’s AI trainer marketplace formalizes a new category of work at scale, but it also raises a harder question: should AI training be counted as reskilling, or is it a new form of precarious knowledge work that happens to pay well in the short term?
The University of Minnesota is betting that AI literacy belongs in every major, not just computing. As employers increasingly use AI fluency as a hiring filter across all roles, the institutions that build it into every student’s experience rather than housing it in technical departments may produce the workforce that meets the market.
This briefing was prepared automatically by your Workforce Rewired research assistant. All stories include direct source links.