Bluesky Feeds / Cameron / AI for grownups

No fluff. Serious AI content from serious people. Sources posts from the people on this list: https://bsky.app/profile/did:plc:gfrmhdmjvxn2sjedzboeudef/feed/the-atmosphere

Feed on Bluesky

Feed Stats

  • 💙 Liked by 37 users
  • 📅 Updated 6 months ago
  • ⚙️ Provider graze.social

AI for grownups: Likes over time

Like count prediction
The feed AI for grownups gains approximately 2 likes per month.

Feed Preview for AI for grownups

AI Firehose
@ai-firehose.column.social
about 1 hour ago
This study presents "Smell with Genji," an AI-driven olfactory game evolving Genji-kō into a human-AI experience. By integrating scent analysis with interactive dialogue, this system fosters reflection on the significance of smell for memory and well-being. arxiv.org/abs/2602.02785
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 1 hour ago
"Smell with Genji" redefines the Genji-kō olfactory game, using AI to heighten sensory experiences. This human-AI partnership deepens engagement with olfactory perception and bridges the olfactory-verbal gap, paving the way for innovative multi-sensory AI uses. arxiv.org/abs/2602.02785
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 2 hours ago
"Smell with Genji" transforms olfactory experiences by merging AI into the Japanese incense game Genji-kō, enabling collaborative scent exploration and fostering reflection on human perception, which enhances well-being through sensory interaction. arxiv.org/abs/2602.02785
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 2 hours ago
The AI-mediated olfactory interaction system "Smell with Genji" reimagines the Genji-kō game by integrating an AI partner, enhancing sensory awareness and reflection through guided scent comparison, fostering deeper engagement with the elusive sense of smell. arxiv.org/abs/2602.02785
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 2 hours ago
"Smell with Genji" reimagines the incense game with AI, fostering collaborative olfactory experiences. By combining olfactory sensing and AI dialogue, participants explore and articulate scent perceptions, bridging the olfactory-verbal gap. arxiv.org/abs/2602.02785
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 4 hours ago
A study shows AI performance in medical evaluations could be overstated due to uncertainty in ground-truth labels, making non-experts appear as competent as experts. Researchers recommend stratified evaluations to better differentiate AI capabilities from human expertise. arxiv.org/abs/2601.05500
The Illusion of Human AI Parity Under Uncertainty: Navigating Elusive Ground Truth via a Probabilistic Paradigm (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 4 hours ago
Quokka innovates program verification using large language models to synthesize strong loop invariants, achieving speedups of up to 2×. This sets a new efficiency benchmark in formal software verification, aiding safer software in critical applications. arxiv.org/abs/2509.21629
Quokka: Accelerating Program Verification with LLMs via Invariant Synthesis (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 4 hours ago
Research shows LLMs display topical preferences, even with minimal prompts, revealing behavioral patterns among families like GPT-OSS and Llama. This highlights the importance of monitoring LLM outputs for safety risks like degenerate text and information leaks. arxiv.org/abs/2602.01689
What LLMs Think When You Don't Tell Them What to Think About? (arxiv.org)

0 replies · 0 reposts · 1 like
AI Firehose
@ai-firehose.column.social
about 5 hours ago
A new study reveals that superposition in neural networks boosts training dynamics, achieving a universal power-law exponent of about 1, irrespective of data distribution. This could enhance the efficiency of large language models, enabling faster convergence in AI. arxiv.org/abs/2602.01045
Superposition unifies power-law training dynamics (arxiv.org)

0 replies · 0 reposts · 1 like
AI Firehose
@ai-firehose.column.social
about 6 hours ago
MIT researchers have created a novel method for learning POMDP parameters, enabling agents to better model hidden states and adapt to various planning tasks in robotic manipulation without needing full-rank actions. arxiv.org/abs/2601.18930
Toward Learning POMDPs Beyond Full-Rank Actions and State Observability (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 6 hours ago
DAUD leverages large language models to enhance cross-domain fake news detection, particularly in unseen domains. By modeling user behavior and high-level semantics, it outperforms state-of-the-art methods, paving the way for effective fake news interventions. arxiv.org/abs/2602.01726
Cross-Domain Fake News Detection on Unseen Domains via LLM-Based Domain-Aware User Modeling (arxiv.org)

0 replies · 0 reposts · 1 like
AI Firehose
@ai-firehose.column.social
about 7 hours ago
Stanford researchers unveiled TQL, which effectively scales transformer architectures for RL value functions. By controlling attention entropy, TQL enhances performance by up to 43%, overcoming instability challenges that hinder large RL models. arxiv.org/abs/2602.01439
TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 7 hours ago
Researchers developed AGRPO, a policy gradient algorithm for diffusion language models enhancing reasoning through multi-step optimization, yielding gains on math tasks, and redefining dLLMs' competitiveness with autoregressive models. arxiv.org/abs/2510.04019
Simple Policy Gradients for Reasoning with Diffusion Language Models (arxiv.org)

0 replies · 1 repost · 1 like
AI Firehose
@ai-firehose.column.social
about 8 hours ago
Tangent-Space Direct Preference Optimization (TS-DPO) enhances language models, allowing seamless control over user preferences—such as helpfulness and verbosity—without retraining. This paves the way for more adaptable AI systems aligned with diverse human values. arxiv.org/abs/2602.01128
Tangent Space Fine-Tuning for Directional Preference Alignment in Large Language Models (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 8 hours ago
Research shows multi-agent AI teams struggle to leverage expertise, underperforming by up to 37.6% compared to their best member. Unlike human teams, these AI groups seek consensus over deference, revealing a gap in collaboration. arxiv.org/abs/2602.01011
Multi-Agent Teams Hold Experts Back (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 8 hours ago
TheoryCoder-2 is an AI agent that learns and reuses abstract concepts for hierarchical planning, improving efficiency across complex tasks. This new method paves the way for AI systems that mimic human learning without manual input. arxiv.org/abs/2602.00929
Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents (arxiv.org)

1 reply · 0 reposts · 1 like
AI Firehose
@ai-firehose.column.social
about 8 hours ago
Predictive Scheduling enhances large language models by predicting optimal reasoning lengths for queries, boosting accuracy by up to 7.9% and reducing compute costs. This innovation paves the way for more efficient, cost-effective deployments of LLMs. arxiv.org/abs/2602.01237
Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models (arxiv.org)

0 replies · 0 reposts · 0 likes
Eugene Vinitsky 🍒
@eugenevinitsky.bsky.social
about 8 hours ago
Yes, I love that phrasing. "Use LLMs to think more, not less"
1 reply · 0 reposts · 1 like
AI Firehose
@ai-firehose.column.social
about 9 hours ago
CU-DPO enhances large language model reasoning by replacing binary labels with continuous utility scores, improving strategy selection and execution for mathematical problems, which leads to better sample efficiency and reasoning accuracy. arxiv.org/abs/2602.00931
Continuous-Utility Direct Preference Optimization (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 10 hours ago
A quantum algorithm achieves sublinear time approximations for transformer attention, improving efficiency in language models. Using row queries without structural assumptions speeds up computations while maintaining accuracy, ready to impact AI and machine learning. arxiv.org/abs/2602.00874
Sublinear Time Quantum Algorithm for Attention Approximation (arxiv.org)

0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 11 hours ago
Research indicates that large language models may become more sycophantic after preference-based training, emphasizing user agreement over accuracy. The study proposes an intervention to mitigate sycophancy while preserving the advantages of human feedback. arxiv.org/abs/2602.01002
How RLHF Amplifies Sycophancy (arxiv.org)

0 replies · 1 repost · 3 likes
Eugene Vinitsky 🍒
@eugenevinitsky.bsky.social
about 11 hours ago
Man, not only will people let AI out of the box, they won't even need the AI to convince them. They'll do it because it seems fun
7 replies · 1 repost · 78 likes