Opinion: Poway Unified’s new AI policy needs more work

Poway Unified School District headquarters. (File photo courtesy of the district)

The politics-administration dichotomy, which assumes that elected officials set policy and professional administrators execute it, is a foundational concept of democratic governance. However, this ideal strains under the pressure of “exceptional cases” — moments of rapid change or high-stakes ambiguity that demand direct political guidance to ensure the spirit of the law is translated into administrative action.


The introduction of artificial intelligence policy into the K-12 environment of the Poway Unified School District represents precisely such an exceptional case. While the draft policy presented by the school board establishes broad, sound principles, its very generality, coupled with external pressures and local concerns, creates significant ambiguities that the board must address directly rather than deferring entirely to the superintendent.

Failure to provide specific interpretive direction before the second reading scheduled for Dec. 4 would be an abdication of necessary political leadership.

The first critical area is the draft policy’s approach to academic integrity. Two key principles — ethical use and accountability — establish the foundation for behavioral expectations. Yet the board’s silence on the single most pressing concern for teachers — cheating — grants administrators undue discretion.

By opting for the euphemism of “ethical and responsible use” over the explicit terms “academic dishonesty” or “cheating,” the board avoids defining the line between permissible AI-augmented student work and fraudulent submission. A presentation to the board outlined the range of permitted and prohibited uses, but a more explicit conversation would confront the concern that weighs most heavily on teachers.

In the face of technological novelty, ambiguity is a systemic risk. Without explicit guidance, the superintendent and staff could interpret “ethical use” through a purely administrative lens of fairness in access to resources, potentially ignoring the disciplinary urgency felt by classroom educators.

Board members must use the second reading of the policy to either insert specific language or issue a formal clarifying statement that equates the “responsible use” of AI with the existing tenets of academic integrity, thereby providing the administration with a clear mandate for establishing specific disciplinary protocols. To address the problem with “niceness” rather than concrete terminology is to intentionally undermine the policy’s practical application in the classroom.

The second, and perhaps more crucial, issue is what the policy describes as “careful consideration of potential biases.” Recent research by groups like the Anti-Defamation League identifying anti-Jewish and anti-Israel material in major AI models, combined with state-level legislation like Assembly Bill 715, transforms “potential biases” into an immediate threat that must be actively mitigated.

The board’s role here is not merely to create a policy document but to ensure that administrative priorities reflect legislative intent and public safety concerns. At a second reading, the board must publicly acknowledge the specific threats of AI-generated hate speech and issue an explicit directive requiring monitoring of AI content.

Finally, the draft policy mandates “professional development for staff,” but this principle must first apply to the board itself. To properly direct the superintendent in these exceptional cases, board members need more than a passive presentation; they require interactive, hands-on training to understand the nuances of AI.

Poway Unified’s draft AI policy is a strong start, but it leaves fundamental questions of academic integrity and political bias unanswered. The second reading is a crucial opportunity to practice the art of informed governance.

By intervening to clarify what is prohibited (cheating) and what must be monitored (antisemitism and other hate speech), the board will fulfill its duty to provide the clear political mandate required to guide administrative implementation through the complex, exceptional terrain of artificial intelligence.

Joe Nalven is an adviser to the Californians for Equal Rights Foundation and a former associate director of the Institute for Regional Studies of the Californias at San Diego State University. He used Google Gemini as an aid in drafting this column.
