
Newsletter #4

September 2025

Please email alromero@mail.wvu.edu if you would like something included

WVU News

  1. Call for Proposals: Exploring Open-Access LLMs for Research

    The Research Office and HPC Team are excited to announce the launch of a pilot system hosting multiple open-access large language models (LLMs). This initiative is designed to give our research community the opportunity to experiment with cutting-edge AI tools in a secure, campus-supported environment.

    We invite faculty members to submit short proposals (1–2 paragraphs) outlining how they would like to use these models in their research. The pilot will remain open through December 2025, during which we hope to:

    • Explore diverse research applications of LLMs across disciplines
    • Identify difficulties and barriers to adoption
    • Collect valuable feedback to shape future HPC-AI resources

    The system will be available in the coming days. It features a web interface similar to ChatGPT, allowing you to select and interact with different open models. At the conclusion of your testing, we request a one-page report summarizing your experience, outcomes, and feedback.
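
    For those who prefer scripting to a web interface: campus LLM portals of this kind are often backed by an OpenAI-compatible HTTP API. Whether the WVU pilot will expose one has not been announced, so treat the sketch below as purely hypothetical; the endpoint URL, model name, and token are placeholders, not real pilot details.

```python
# Hypothetical sketch only: querying an OpenAI-compatible chat endpoint.
# The URL, model name, and token are placeholders, not actual pilot details.
import requests

API_URL = "https://llm-pilot.example.wvu.edu/v1/chat/completions"  # placeholder
API_KEY = "YOUR_PILOT_TOKEN"                                       # placeholder

payload = {
    "model": "llama-3.1-8b-instruct",  # placeholder open-model name
    "messages": [
        {"role": "user",
         "content": "List common barriers to adopting LLMs in research."}
    ],
    "temperature": 0.2,  # lower temperature -> more reproducible output
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```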

    Key Details

    • Who can apply: Faculty (research proposals only)
    • Proposal length: 1–2 paragraphs
    • Pilot period: Until December 2025
    • Deliverable: One-page report at the end of your trial

    This is a unique opportunity to shape the integration of LLMs into our research ecosystem while influencing the direction of future HPC services.

    Submit your proposal today and be among the first to explore the future of research with LLMs.

  2. Van Liere & WV IDeA Research Conference

    The conference is set for October 17–18, 2025. On October 17 (Day 1), Ashwini Davison, MD, FACP, FAMIA, will deliver the keynote presentation. Dr. Davison is the Chief Medical Informatics Officer (CMIO) for Oncology at Amazon Web Services (AWS).

  3. WVU Libraries: “Artificial Intelligence: Shaping Futures, Impacting Lives” Exhibit & Panel

    The WVU Libraries (in collaboration with the Humanities Center) are hosting a panel discussion to launch the cross-campus exhibition Artificial Intelligence: Shaping Futures, Impacting Lives on Tuesday, September 30, 2025, at 4:00 PM in the Downtown Library Robinson Reading Room.

    The exhibition will be on display across the Downtown, Evansdale, and Health Sciences Libraries and includes contributions from students, scholars, and the community; it was also curated with the help of AI tools (with human edits).

  4. WVU AI Community Update: Mailing List Growth

    What started in 2024 with just 15 faculty subscribers has grown into a vibrant community of 85+ faculty and administrators on our mailing list today. Thank you all for your interest and contributions, and for helping make this group meaningful. If you know someone who should join, send them our way, or tell them to drop a line to alromero@mail.wvu.edu.

News of AI around the World

Recent Highlights

  1. Create Custom Soundtracks with AI — royalty-free

    Want a soundtrack tailored just for your project, without worrying about licensing? Check out TemPolor, an AI music generator that lets you create royalty-free tracks in seconds. Choose by genre, mood, tempo, or even upload text or MIDI to generate your own pieces. Perfect for video content, podcasts, or background audio.

    Learn more / try it here: TemPolor – AI Music Generator (royalty-free)

  2. OpenAI & Oracle’s $300 Billion Cloud Deal

    In one of the largest cloud-computing agreements ever, OpenAI has committed to purchasing $300 billion in computing power from Oracle over roughly five years, beginning in 2027. The deal is part of the broader Stargate Project, which already involves massive investments in AI infrastructure including power, data centers, and chip design.

    Some highlights:

    • The infrastructure will require ~4.5 gigawatts of power, enough to supply millions of homes.
    • Oracle recorded over $317 billion in future contract revenue, boosted partly by this deal.
    • OpenAI also appears to be pushing ahead with designing its own AI chip via a separate multi-billion-dollar contract with Broadcom.

    Why it matters: this signals a major escalation in AI infrastructure investment. It also shows how AI companies are diversifying beyond existing cloud providers (e.g., Microsoft Azure) to ensure capacity and scalability. Deals like this shape where, and at what scale, the next generation of large-scale AI tools will be built.

  3. Meta & OpenAI tighten guardrails for teens

    After a Reuters investigation revealed that Meta’s internal policy once allowed “romantic or sensual” chats with minors, Meta says it is retraining its chatbots to avoid those topics and to steer away from self-harm discussions (with more controls coming). OpenAI announced parental controls and will route sensitive chats (e.g., acute distress) to its reasoning models; broader safeguards are rolling out. Meanwhile, the FTC opened an inquiry into consumer chatbots from Meta, OpenAI, and others.

  4. “AI students” aren’t like real students (and they ace the test)

    A new study compared 11 LLMs (prompted to act as simulated grade-level students) against real students’ NAEP math and reading responses. The models typically outperform real students and fail in different ways, so using LLMs as student stand-ins can mislead teachers and tool designers.

  5. Why 95% of enterprise AI pilots fizzle (and what to do about it)

    MIT Media Lab’s Project NANDA finds that only 5% of custom GenAI pilots reach production with measurable P&L impact; most stall on integration, brittle workflows, and weak in-context “learning.” McKinsey echoes the macro picture: >80% of companies report no material earnings impact yet. Translation: target narrow, high-leverage workflows and build for compounding value, not demos.

  6. Sam’s Club puts GenAI on the sales floor; Walmart taps OpenAI Certification

    Sam’s Club is arming frontline managers with enterprise-grade GenAI to shrink routine work and speed decisions. Separately, Walmart will offer the new OpenAI Certification to U.S. associates, part of OpenAI’s push to certify 10M Americans by 2030.

  7. Anthropic’s $1.5B author settlement (pending court approval)

    Anthropic agreed to pay $1.5B to settle a class action alleging it trained on pirated books; the deal includes destroying those datasets and paying ~$3,000 per book in compensation (at that rate, roughly 500,000 covered works). A judge has scrutinized aspects of the agreement, but if approved it would be the largest copyright recovery on record and a major precedent for AI training data.

  8. Claude can now create & edit files (Excel, PowerPoint, Docs, PDFs)

    Anthropic rolled out a feature that outputs real files directly from chat (and can also edit uploads) in Claude.ai and the desktop app—now in preview for Max/Team/Enterprise, with Pro “coming soon.” Handy for turning data + instructions into ready-to-use deliverables.

AI Research in Education – Fresh Insights

  1. UNESCO survey: How universities are using & governing AI

    What

    A survey presented at UNESCO’s Digital Learning Week (September 2–5), covering ~400 respondents (UNESCO Chairs / UNITWIN Networks) across 90 countries.

    Findings

    • ~90% of respondents use AI tools in their work (e.g., for research and writing)
    • ~50% are experimenting with AI in teaching (lesson planning, grading support, plagiarism detection)
    • But confidence is uneven: many don’t feel well prepared for, or clear about, the ethical and social implications; only ~19% have a formal policy, and ~42% are developing guidance frameworks.

    Why it matters

    It shows that institutional adoption is widespread but policy and practice lag behind. This gap is an opening for groups and institutions to help build resources, training, and ethical frameworks.

  2. Google Research’s “AI Quests” for middle school students (ages 11-14)

    What

    A new interactive learning experience (“AI Quests”) developed by Google Research with the Stanford Accelerator for Learning, aimed at teaching AI literacy through real-world problem solving, beginning with flood forecasting. Released around September 9, 2025.

    Findings

    • Students play through quests: defining problems, gathering data, and training/testing models
    • Curriculum-friendly, hands-on, and suitable for global deployment
    • Designed to integrate with other AI literacy initiatives (DeepMind, Raspberry Pi, etc.)

    Why it matters

    It moves beyond passively learning about AI to actually doing AI, and it offers a good model for K-12 schools adopting project-based AI education.

  3. Educators are increasingly using AI — even for grading

    What

    An analysis by Anthropic (reported via Axios) of ~74,000 anonymized educator–chatbot conversations from May–June 2025.

    Findings

    • 57% of faculty used AI for curriculum development, 13% for research, and ~7% for assessing student performance.
    • In grading conversations, nearly half (48.9%) fully delegated the task to the AI, raising concerns about fairness, integrity, and understanding of the tools’ limitations.

    Why it matters

    It shows how educator behavior is shifting: it’s not just students using AI, but increasingly teachers delegating work to it. That creates policy, assessment, and ethics challenges to address.

  4. Deep Read: AI-Enabled Test Practice & Its Surprising Effects

    A new large-scale observational study (N = 25,969) examined how practice tests created via automated item generation (AIG) affect performance, confidence, and admission behavior among Duolingo English Test (DET) takers. Key findings: taking 1–3 practice tests correlates with better DET scores and higher confidence; taking more than 3 may actually lead to lower performance, potentially because of over-practicing or using the tests in non-optimal ways. This has important implications for how educators structure test prep and calibrate student expectations.

    Read more: Exploring AI-Enabled Test Practice, Affect, and Test Outcomes in Language Assessment (arXiv)

  5. Deep Read: Teaching Students to Critique AI Outputs

    In introductory Computational & Data Science courses, this pilot study asks: How well do students evaluate the correctness or appropriateness of AI-generated solutions? Students were given flawed or partial AI answers and asked to analyze, critique, and revise them. The findings show that while many students struggle, structured exercises help build critical-thinking skills when AI is used as a learning partner rather than just a shortcut. A valuable model for designing AI-aware classrooms.

    Read more: Pilot Study on Generative AI and Critical Thinking in Higher Education Classrooms (arXiv)

Selected AI research breakthroughs

  1. “The AI breakthrough that uses almost no power to create images”

    What

    Researchers at UCLA and collaborators introduced a diffusion-based generative image model that produces colorful, high-quality images with very low energy consumption, sharply reducing power needs compared with standard diffusion models that run many time steps.

    Why it matters

    As image generation gets more widespread, power and energy usage are major costs and environmental concerns. This makes it more feasible to run powerful generative models in constrained settings (edge devices, lower-cost infrastructure).

    Read more: Nature article on low-power image generation

  2. “AI tool detects LLM-generated text in research papers and peer reviews”

    What

    The publisher AACR (American Association for Cancer Research) used a tool from Pangram Labs to analyze tens of thousands of manuscripts and peer-review reports (2021–2024). It found that ~23% of abstracts and ~5% of peer-review reports contained suspected LLM-generated text. Importantly, many authors and reviewers did not disclose their AI use, even when submissions required it.

    Why it matters

    It raises concerns about transparency and research integrity, and about whether peer-review and academic-publication norms are keeping up with reality. It can also help universities and journals think through policy, disclosure, and detection.

    Read more: Nature article: “AI tool detects LLM-generated text…”

  3. “Medical foundation models: building truly global medical AI” (Nature Medicine)

    What

    An overview of the current state of medical foundation models (large AI models trained on broad medical data), their successes (pathology, radiology, ophthalmology), and major challenges (data diversity, privacy, regulation, ethics).

    Why it matters

    Medical AI is one of the highest-stakes domains. Foundation models promise great flexibility but also carry high risk if not developed carefully. This overview pulls together, for medical researchers, where things currently stand.

    Read more: Nature Medicine: Building the world’s first truly global medical foundation models

  4. “One-fifth of computer science papers may include AI content”

    What

    A study found a strong post-ChatGPT increase in the use of AI-generated text in computer science papers across many subfields. About 20% of recent CS papers may include some AI-generated text.

    Why it matters

    Academic writing is being transformed; norms around authorship, disclosure, originality, and plagiarism are shifting. This could affect how education and research are done and assessed.

    Read more: Science article: “One-fifth of computer science papers…”

  5. “Training of physical neural networks”

    What

    A review article exploring physical neural networks (PNNs): hardware implementations or physical analogs that map neural-network computations onto physical systems (optical, electronic, etc.). The aim is to boost energy efficiency, reduce reliance on purely digital compute, and potentially achieve performance advantages on specific tasks.

    Why it matters

    If AI can move from purely digital emulations to more efficient physical substrates, that’s big for scaling, for environmental impact, and for deploying AI in resource-limited settings.

    Read more: Nature: Training of Physical Neural Networks

  6. “Stop Evaluating AI with Human Tests — Develop Principled, AI-specific Tests instead”

    What

    This paper argues that evaluating LLMs with tests designed for humans (IQ tests, psychological or personality tests, standardized exams) is misleading. Such tests are calibrated for human populations and rest on assumptions about cognition that may not apply to AI; they are also vulnerable to confounds such as biases, data contamination, and prompt tricks. The authors propose AI-centric evaluation frameworks: benchmarks and metrics designed for AI, not just repurposed human ones.

    Why it matters

    It helps us think critically about claims such as “AI is as smart as humans” or “AI passed X exam,” and suggests our hype and assumptions may be skewed by the metrics we use. Very relevant if you’re thinking about “alternative assessment,” AI versus student intelligence, and similar questions.

    Read more: arXiv: “Stop Evaluating AI with Human Tests …”

Upcoming: Proposal Calls

  1. PRIMED-AI call from Grants.gov / NIH

    PRIMED-AI stands for Precision Medicine with Artificial Intelligence: Integrating Imaging with Multimodal Data, a Common Fund program under NIH. Its goal is to fund the development of AI-based clinical decision support (CDS) tools that combine medical imaging with non-imaging health data (e.g., genomic, clinical, and lab data) to improve personalized medicine for chronic and other health conditions.

    NIH approved the concept in April 2025. The program includes strategic planning, landscape analyses, and workshops, and it is now moving toward issuing multiple funding opportunity announcements (FOAs). These FOAs are forecasted but not yet open.

  2. OpenAI “People-First AI Fund” ($50M for nonprofits & community-led projects)

    OpenAI has opened applications for grants (September 8 through October 8, 2025) to support nonprofit and mission-driven organizations working in AI literacy, public education, community innovation, service delivery, and related areas. Applicants need not already use AI tools. Deadline: October 8, 2025.

    Who can apply: US-based nonprofits (501(c)(3)).

  3. Humanities Research Centers on Artificial Intelligence (NEH, USA)

    NEH is funding proposals to establish new humanities research centers focused on AI and its cultural, societal, and historical implications. The program supports sustained collaboration among scholars, plus activities such as workshops, lecture series, and curriculum development.

    Deadline: October 1, 2025.

    Funding amount: up to ~$500,000 (plus matching funds in some cases) over three years.

See you on September 26!