arxiv.org
Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI
ArXiv link for Smell with Genji: Rediscovering Human Perception through an Olfactory Game with AI

arxiv.org
The Illusion of Human AI Parity Under Uncertainty: Navigating Elusive Ground Truth via a Probabilistic Paradigm
ArXiv link for The Illusion of Human AI Parity Under Uncertainty: Navigating Elusive Ground Truth via a Probabilistic Paradigm

arxiv.org
Quokka: Accelerating Program Verification with LLMs via Invariant Synthesis
ArXiv link for Quokka: Accelerating Program Verification with LLMs via Invariant Synthesis

github.com
GitHub - agno-agi/dash: Self-learning data agent that grounds its answers in 6 layers of context. Inspired by OpenAI's in-house implementation.
Self-learning data agent that grounds its answers in 6 layers of context. Inspired by OpenAI's in-house implementation. - agno-agi/dash

arxiv.org
What LLMs Think When You Don't Tell Them What to Think About?
ArXiv link for What LLMs Think When You Don't Tell Them What to Think About?

arxiv.org
Superposition unifies power-law training dynamics
ArXiv link for Superposition unifies power-law training dynamics

arxiv.org
Toward Learning POMDPs Beyond Full-Rank Actions and State Observability
ArXiv link for Toward Learning POMDPs Beyond Full-Rank Actions and State Observability

arxiv.org
Cross-Domain Fake News Detection on Unseen Domains via LLM-Based Domain-Aware User Modeling
ArXiv link for Cross-Domain Fake News Detection on Unseen Domains via LLM-Based Domain-Aware User Modeling

arxiv.org
TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse
ArXiv link for TQL: Scaling Q-Functions with Transformers by Preventing Attention Collapse

arxiv.org
Simple Policy Gradients for Reasoning with Diffusion Language Models
ArXiv link for Simple Policy Gradients for Reasoning with Diffusion Language Models

gigazine.net
Apple announces Xcode integration with the coding agents Claude Agent and OpenAI Codex, plus MCP support
Apple released the release candidate (RC) of Xcode 26.3 on February 4, 2026, announcing an "agentic coding" implementation that strengthens support for AI coding agents. This allows coding agents such as Anthropic's Claude Agent and OpenAI's Codex to code autonomously directly within the IDE (integrated development environment).

arxiv.org
Tangent Space Fine-Tuning for Directional Preference Alignment in Large Language Models
ArXiv link for Tangent Space Fine-Tuning for Directional Preference Alignment in Large Language Models

arxiv.org
Multi-Agent Teams Hold Experts Back
ArXiv link for Multi-Agent Teams Hold Experts Back

arxiv.org
Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents
ArXiv link for Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents

arxiv.org
Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models
ArXiv link for Predictive Scheduling for Efficient Inference-Time Reasoning in Large Language Models

arxiv.org
Continuous-Utility Direct Preference Optimization
ArXiv link for Continuous-Utility Direct Preference Optimization

arxiv.org
Sublinear Time Quantum Algorithm for Attention Approximation
ArXiv link for Sublinear Time Quantum Algorithm for Attention Approximation

arxiv.org
How RLHF Amplifies Sycophancy
ArXiv link for How RLHF Amplifies Sycophancy
