Saw the internal memo from Meta's head of people: they're making "AI-driven impact" a core expectation in performance reviews starting in 2026. This feels like a watershed moment. Some quick thoughts on what this means operationally:
The AI literacy ladder is real now. You can't just say "use AI more." Companies need structured progression: basic tool usage → workflow design → full automation ownership. Meta's essentially saying fluency is no longer optional.
Change management becomes critical. An "AI first" mandate only works if it's paired with serious change management. We've seen this internally: if leadership isn't using these tools daily, adoption dies. You can't delegate the rebuild to engineers anymore; operators need to become builders.
The people-first tension. When you say "AI first," people hear "people second." That's not the point. The goal is removing cognitive load and rote work so teams can focus on strategic thinking and, frankly, better human connection. But that messaging has to be intentional.
Role evolution is coming. Some roles will be upskilled within the org. Others will find their skillset is more valuable elsewhere. The demand for people who can help organizations implement AI is going to be massive over the next decade.
One thing I'm curious about: how do you measure "AI-driven impact" without killing critical thinking? If everyone becomes overly reliant on AI outputs, do we lose the ability to challenge assumptions?
Would love perspectives from folks in larger orgs. Is your company starting to formalize AI expectations?