
Issue #10A

🌱 From the AI Frontier (without the hype)

Welcome to Newsletter #10A (February)

This month’s theme: AI is becoming less of a “tool you try” and more of a workflow you live inside—in software, healthcare, education, and research. Below you’ll find campus updates, world highlights, and a curated set of links that are worth your time (and won’t punish your calendar).

Innovating Accessibility: AI at the Job Accommodation Network
Friday, March 27 | 12:00 PM | Allen Hall 414
Dr. DJ Hendricks, Director of the Job Accommodation Network, will discuss the development of an online chatbot integrated into the JAN website and how it may transform access to workplace accommodation guidance.

    News of AI Around the World—Recent Highlights

  1. Experts Debate the Next 5 Years of AI: “Transformative”… and Eventually “Boring”

    Follow the info:

    Summary:

    A panel-style debate surfaced a useful paradox: AI may become the most transformative technology of the era and eventually fade into the background like GPS—everywhere, essential, and less headline-worthy. The most actionable takeaway for campuses is a mindset shift: treat AI as a construction tool that helps build bigger projects (analysis, software, research workflows), not as a gym substitute where the point is to strengthen your own thinking (writing practice, foundational reasoning). That framing is particularly relevant for course design and student skill development.

  2. OpenAI’s Codex App: Multi-Agent “Command Center” for Coding Work

    Follow the info:

    Summary:

    OpenAI’s Codex app is built for working with multiple coding agents in parallel, each running in its own thread, organized by project. Instead of one chat that tries to do everything, you can delegate tasks (debugging, refactors, tests, docs) to separate agents, review diffs, comment, and iterate without losing context. The bigger signal: “coding assistance” is shifting from autocomplete to coordinated task execution, which may soon reshape how research software, campus tools, and admin automation get built.

  3. Quick Primer (Short + Useful): Open vs. Closed AI Models

    Follow the info:

    Summary:

    • Open models publish weights and/or runnable releases so organizations can deploy locally, audit behavior, and tailor to sensitive workflows.
    • Closed models are accessed through hosted services/APIs; they can be powerful and convenient but limit transparency and local control.

      This matters because local deployment is increasingly tied to privacy, cost control, and campus governance.

  4. StepFun’s Step 3.5 Flash: Fast, Local, Long-Context AI for Developers

    Follow the info:

    Summary:

    Step 3.5 Flash is a high-end open model designed for efficient real-world work: it uses a sparse Mixture-of-Experts design (196B total parameters, ~11B active per token) to deliver strong reasoning with lower runtime cost. It also emphasizes long context (useful for big codebases, long documents, research notes) and high throughput, which enables more responsive “agent” workflows. For research and education, this pushes a practical point: advanced AI is increasingly feasible on local or institution-controlled infrastructure, not only via cloud APIs.
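    To make the sparse Mixture-of-Experts idea concrete, here is a minimal illustrative sketch in Python (toy sizes and a made-up router—not StepFun's architecture or code): a router scores each token against all experts, but only the top-k experts actually run, so the parameters touched per token are a small fraction of the total.

    ```python
    # Toy sketch of sparse Mixture-of-Experts routing (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    N_EXPERTS = 8   # total experts (hypothetical numbers)
    TOP_K = 2       # experts activated per token
    DIM = 16        # hidden dimension

    # Each expert is a simple weight matrix; together they make up the "total" parameters.
    experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
    router = rng.standard_normal((DIM, N_EXPERTS))  # learned in a real model

    def moe_layer(x):
        """Route token vector x to its top-k experts and mix their outputs."""
        scores = x @ router                   # router logit per expert
        top = np.argsort(scores)[-TOP_K:]     # indices of the k highest-scoring experts
        weights = np.exp(scores[top])
        weights /= weights.sum()              # softmax over the chosen experts only
        # Only TOP_K of N_EXPERTS experts do any work for this token.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    token = rng.standard_normal(DIM)
    out = moe_layer(token)

    total_params = N_EXPERTS * DIM * DIM   # parameters that exist
    active_params = TOP_K * DIM * DIM      # parameters used for this token
    print(f"total={total_params}, active per token={active_params}")
    ```

    In Step 3.5 Flash's reported numbers the same principle scales up: roughly 11B of 196B parameters are active per token, which is why runtime cost tracks the active count rather than the total.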

  5. Moltbook: Viral “AI-Only” Social Forum Meets Security Reality

    Follow the info:

    Summary:

    Moltbook—an AI-agent-only forum—went viral as a glimpse of “agents interacting at scale,” but security researchers quickly reported serious exposure risks (including sensitive data and keys) tied to rushed deployment practices. The broader lesson is practical rather than alarmist: agentic systems expand the attack surface because they connect models to tools, credentials, and actions. For institutions, Moltbook is a timely case study: if AI agents are given access to files, code execution, or messaging, then security-by-design and governance can’t be optional add-ons.

  6. Google’s Conductor: Track-Based, Documented AI Coding Workflows

    Follow the info:

    Summary:

    Conductor reshapes “AI coding” into a structured workflow: context files + specs + plans stored as Markdown alongside the code, then implementation follows that plan. The key idea is persistence and reproducibility: instead of fragile chat history, project knowledge becomes versioned artifacts that teams can review, reuse, and audit. This is especially relevant for university environments where continuity, handoff, and documentation matter (students graduate; staff rotate; projects live for years).


  7. 🩺 Public Services and Health

  8. DoctorSV: El Salvador’s National AI-Enabled Digital Health Push

    Follow the info:

    Summary:

    DoctorSV is a nationwide digital health platform in El Salvador combining telemedicine-style access with AI-assisted workflows and patient records through an app interface. The significance isn’t just the tech—it’s the scale: a government using AI-enabled digital infrastructure as a public service delivery mechanism. For universities, it’s a real-world reference point for research and discussion around AI deployment, equity in access, and governance in high-stakes domains.

  9. ChatGPT Health: Personalized Health Context Meets Guardrails Questions

    Follow the info:

    Summary:

    OpenAI introduced ChatGPT Health, enabling users to connect medical records and wellness apps so responses can be informed by personal health context (e.g., test results, appointments, lifestyle data). The near-term impact is likely patient-facing: better question prep, clearer interpretation of trends, and improved navigation of complex information—while raising serious ongoing questions around accuracy, privacy, and appropriate boundaries in health guidance.


  10. 🎨 Scientific Communication

  11. FigureLabs: AI for Publication-Ready Scientific Figures

    Follow the info:

    Summary:

    FigureLabs is positioned as an AI-driven tool for generating and refining scientific visuals—helping convert ideas, sketches, or drafts into clearer, publication-style figures. The immediate usefulness for research is obvious: figures are often the slowest part of paper prep and grant communication. Tools like this could reduce friction in making clean schematics, graphical abstracts, workflow diagrams, and explanatory visuals—especially helpful for labs without dedicated design support.

  12. AI Research in Education—Fresh Insights

  13. “Ask AI How to Think, Not What to Think”

    Follow the info:

    Summary:

    Evidence from creativity research suggests AI is most useful when it provides methods, frameworks, or reasoning approaches rather than final answers. Asking “how should I approach this?” tends to expand exploration; asking “give me the idea” can narrow it. This aligns neatly with strong pedagogy: AI is best used to amplify thinking processes—especially for brainstorming, research planning, and structured critique.

  14. Selected AI Research Breakthroughs

  15. Hybrid AI + Physics Improves Extreme Weather Forecasting

    Follow the info:

    Summary:

    A hybrid approach combines AI with physics-based climate models to better forecast rare extremes (heatwaves, intense rainfall) faster than standard simulation alone. The message for researchers is broader than climate: the most robust “scientific AI” often comes from AI + mechanistic models, not AI alone—especially when extrapolating beyond historical training data.

  16. AI Boosts Individual Scientists—But May Narrow Collective Discovery

    Follow the info:

    Summary:

    A Nature analysis reports a productivity paradox: AI tools can amplify output and citations at the individual level, while potentially narrowing topic diversity and follow-on exploration at the field level. The takeaway for institutions is not to slow adoption, but to pair AI acceleration with incentives for novelty, diversity of questions, and exploratory work that doesn’t simply optimize for speed.

  17. Prompting Tip of the Week (Keep it Simple, Keep it Useful)

    Single-Shot vs. Step-Structured Prompts (Research Example)

    Format: copy/paste-ready prompts below.

    Title: Don’t Just Ask for Output—Ask for a Process

    Link: (Internal tip—no link)

    Summary:

    Single-shot prompt (often shallow):

    Summarize this abstract and identify limitations: [paste abstract]

    Better step-structured prompt (more reliable):

    You are an expert research reviewer.
    1. Summarize the abstract in 3–5 sentences.

    2. List 2–3 limitations the authors should address next.

    3. For each limitation, explain why it matters and suggest one experiment/analysis to resolve it.

    Abstract: [paste abstract]

  18. Upcoming Proposal Calls (Spring 2026 Snapshot)

  19. NSF—FAIROS (Open Science Infrastructure)

    Link: https://odsr.illinois.edu/funding-opportunities/

    Summary:

    Supports projects that improve open science infrastructure (data/software/workflows) aligned with FAIR principles; relevant to AI-enabled research systems and reproducible computing. (Full proposals due Apr 8, 2026 per listing.)

  20. NSF—Science of Learning & Augmented Intelligence

    Link: https://odsr.illinois.edu/funding-opportunities/

    Summary:

    Funds research on learning + human–AI collaboration across cognitive/computational/social dimensions; strong fit for AI-in-education research. (Full proposals due Feb 11, 2026 per listing.)

  21. NIH—Computational Genomics & Data Science (R21)

    Link: https://odsr.illinois.edu/funding-opportunities/

    Summary:

    Exploratory projects at the intersection of data science and biomedical/genomics applications—often AI/ML-heavy. (Full proposals due Feb 16, 2026 per listing.)

  22. NSF—Mid-Career Advancement (MCA)

    Link: https://odsr.illinois.edu/funding-opportunities/

    Summary:

    Supports established researchers at a mid-career pivot point; often used for interdisciplinary expansion (including AI/data-intensive work). (Window Feb 1 – Mar 2, 2026 per listing.)

  23. NSF—Digital Twins Foundations (BioTech focus)

    Link: https://engineering.tufts.edu/research/funding-opportunities/

    Summary:

    Interdisciplinary program supporting mathematical/AI foundations for digital twins and synthetic data in biomedical contexts. (Full proposals due May 4, 2026 per listing.)

  24. AFOSR—Broad Agency Announcements (BAAs)

    Link: https://www.afrl.af.mil/About-Us/Fact-Sheets/Fact-Sheet-Display/Article/2282103/afosr-funding-opportunities/

    Summary:

    Rolling opportunities for basic research aligned with Air Force priorities, including AI/autonomy/advanced computation; typically benefits from early alignment with a Program Officer.

  25. NSF—NAIRR / AI Infrastructure (Planning signals)

    Link: https://www.nsf.gov/news

    Summary:

    NSF’s expanding AI research infrastructure efforts signal more 2026 opportunities tied to AI resources, data ecosystems, and operations planning as programs formalize.

  26. Grand Challenges

    Dear Colleague,

    We wanted to make sure members of the Grand Challenges community were aware of a new Request for Proposals (RFP) on "Evaluating AI-Enabled Decision Support Tools for Frontline Workers in Primary and Community Health Care Settings".

    This is the first RFP issued through the newly launched Evidence for AI in Health (EVAH) initiative, which is co-funded by the Gates Foundation, Novo Nordisk Foundation, and Wellcome and will be implemented in partnership with the Abdul Latif Jameel Poverty Action Lab and the African Population and Health Research Center.

    AI has the potential to transform many aspects of health care, but there are significant gaps in the availability of evidence on how AI tools perform in real-world health settings in low- and middle-income countries. The EVAH initiative aims to address that gap, with the first RFP focused on AI-enabled decision support tools designed to assist frontline workers with clinical tasks such as triage, diagnosis, and referral in primary and community health care settings in sub-Saharan Africa, South Asia, and Southeast Asia.

    The RFP will support two types of evaluations:

    • Pathway A: supports real-world evaluation of AI-enabled clinical decision support tools that are early in deployment. The pathway focuses on how the tools perform in practice, including usability, workflow integration, adoption, and safety, and supports research that can inform future impact evaluations. Grants of up to USD $1,000,000 will be awarded for Evaluation Pathway A projects, with a project term of 3–12 months.
    • Pathway B: supports rigorous impact evaluations of AI-enabled clinical decision support tools that are ready to be deployed at scale. This pathway focuses on measuring the effects of these tools on health outcomes and system performance at scale. Grants of up to USD $3,000,000 will be awarded for Evaluation Pathway B projects, with a project term of 12–24 months.

    Applications are due by April 1, 2026. Please read the RFP carefully for more information on the challenge and opportunity, eligibility, requirements, and timelines.

    Thank you for being a member of the Grand Challenges community. We invite you to share this opportunity with your networks.

    Sincerely,
    The Global Partnerships & Grand Challenges Team on behalf of the Novo Nordisk Foundation, Wellcome, and Gates Foundation