PaperSky: New AI Papers

Posts linking to AI papers, or to people sharing explainer threads about research. Be mindful of hype, and be aware that many of these are preprints (i.e., not peer reviewed). To guarantee your post shows up here, use #papersky. Created by @eryk.bsky.social.


Feed Stats

  • πŸ’™ Liked by 6 users
  • πŸ“… Updated 10 months ago
  • βš™οΈ Provider blueskyfeedcreator.com

Likes over time: PaperSky: New AI Papers has not gained any likes in the last month.

Feed Preview for PaperSky: New AI Papers

David AI10bro
@ai10bro.bsky.social
2 minutes ago
Like a beehive refining nectar into pure honey, SimpleQA Verified distills complex language models into clear measures of factual accuracy. This 1,000-prompt benchmark filters through noise and bias, offering a clearer path forward with Gemini 2. πŸ”— Source: arxiv.org/abs/2509.07968v1
0 replies · 0 reposts · 0 likes
AI Research Updates | arXiv cs.AI
@arxiv-cs-ai.bsky.social
20 minutes ago
CLEAR: A Comprehensive Linguistic Evaluation of Argument Rewriting by Large Language Models Read more: arxiv.org/html/2509.15027v1
0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
20 minutes ago
The novel EVOREASONER framework enhances LLMs' dynamic question-answering capabilities by incorporating temporal reasoning and knowledge graph evolution, markedly improving performance and enabling smaller models to compete with vastly larger ones. arxiv.org/abs/2509.15464
Link card (arxiv.org): Temporal Reasoning with Large Language Models Augmented by Evolving Knowledge Graphs

0 replies · 0 reposts · 0 likes
Hackernews Top Stories
@news.facts.dev
43 minutes ago
⚑ Hackernews Top story: Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents
Link card (arxiv.org): Paper2Agent: Reimagining Research Papers As Interactive and Reliable AI Agents

We introduce Paper2Agent, an automated framework that converts research papers into AI agents. Paper2Agent transforms research output from passive artifacts into active systems that can accelerate downstream use, adoption, and discovery. Conventional research papers require readers to invest substantial effort to understand and adapt a paper's code, data, and methods to their own work, creating barriers to dissemination and reuse. Paper2Agent addresses this challenge by automatically converting a paper into an AI agent that acts as a knowledgeable research assistant. It systematically analyzes the paper and the associated codebase using multiple agents to construct a Model Context Protocol (MCP) server, then iteratively generates and runs tests to refine and robustify the resulting MCP. These paper MCPs can then be flexibly connected to a chat agent (e.g. Claude Code) to carry out complex scientific queries through natural language while invoking tools and workflows from the original paper. We demonstrate Paper2Agent's effectiveness in creating reliable and capable paper agents through in-depth case studies. Paper2Agent created an agent that leverages AlphaGenome to interpret genomic variants and agents based on ScanPy and TISSUE to carry out single-cell and spatial transcriptomics analyses. We validate that these paper agents can reproduce the original paper's results and can correctly carry out novel user queries. By turning static papers into dynamic, interactive AI agents, Paper2Agent introduces a new paradigm for knowledge dissemination and a foundation for the collaborative ecosystem of AI co-scientists.

0 replies · 0 reposts · 0 likes
Paper
@paper.bsky.social
about 1 hour ago
Top 30 most popular arXiv papers in the last 30 days:
1/30 https://arxiv.org/abs/2508.21038
2/30 https://arxiv.org/abs/2508.18255
3/30 https://arxiv.org/abs/2508.18265
4/30 https://arxiv.org/abs/2508.18475
5/30 https://arxiv.org/abs/2509.03065
6/30 https://arxiv.org/abs/2509.08721
7/30 https://arxiv.org/abs/2509.06503
8/30 https://arxiv.org/abs/2509.04664
9/30 https://arxiv.org/abs/2509.06818
10/30 https://arxiv.org/abs/2509.02547
11/30 https://arxiv.org/abs/2509.08827
12/30 https://arxiv.org/abs/2509.02544
13/30 https://arxiv.org/abs/2508.18106
14/30 https://arxiv.org/abs/2509.07295
15/30 https://arxiv.org/abs/2509.04259
16/30 https://arxiv.org/abs/2508.21141
17/30 https://arxiv.org/abs/2509.12337
18/30 https://arxiv.org/abs/2509.02333
19/30 https://arxiv.org/abs/2509.11481
20/30 https://arxiv.org/abs/2508.21112
21/30 https://arxiv.org/abs/2508.20722
22/30 https://arxiv.org/abs/2509.09372
23/30 https://arxiv.org/abs/2509.09677
24/30 https://arxiv.org/abs/2509.09674
25/30 https://arxiv.org/abs/2509.06160
26/30 https://arxiv.org/abs/2508.19201
27/30 https://arxiv.org/abs/2509.13310
28/30 https://arxiv.org/abs/2508.21148
29/30 https://arxiv.org/abs/2509.10147
30/30 https://arxiv.org/abs/2508.17445
0 replies · 0 reposts · 0 likes
@shuberfuber.bsky.social
about 1 hour ago
Nah. It's all because of one paper. arxiv.org/abs/1706.03762 The entire explosion of modern generative AI is because of that paper.

0 replies · 0 reposts · 0 likes
GreenEngineer
@eubanksengineeringresearch.com
about 1 hour ago
It is perhaps not fully acknowledged, but neither is it unknown. arxiv.org/pdf/2503.22919

1 reply · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 2 hours ago
This study presents Img2CAD, enabling the reverse engineering of 3D CAD models from images via vision-language models and attribute prediction. It transforms manufacturing editing, allowing intuitive modifications and showcasing potential for real-world applications. arxiv.org/abs/2408.01437
Link card (arxiv.org): Img2CAD: Reverse Engineering 3D CAD Models from Images through VLM-Assisted Conditional Factorization

0 replies · 0 reposts · 0 likes
Hacker News 20
@betterhn20.e-work.xyz
about 2 hours ago
Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents arxiv.org/abs/2509.06917 (news.ycombinator.com/item…)

Link card (arxiv.org): Paper2Agent: Reimagining Research Papers As Interactive and Reliable AI Agents

0 replies · 0 reposts · 0 likes
Marius Schneider
@mariusschneider.bsky.social
about 2 hours ago
🚨Our NeurIPS 2025 competition Mouse vs. AI is LIVE! We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like. Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge! Whitepaper: arxiv.org/abs/2509.14446
Link card (arxiv.org): Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment
"Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ..."

2 replies · 10 reposts · 15 likes
Marius Schneider
@mariusschneider.bsky.social
about 2 hours ago
Submit your model. Compete against mice. Which model architectures solve the task, and which find brain-like solutions? Let us uncover what it takes to build robust, biologically inspired agents! Read the whitepaper: arxiv.org/abs/2509.14446 Explore the challenge: robustforaging.github.io
1 reply · 0 reposts · 3 likes
Amy Serrat
@amyctserrat.bsky.social
about 2 hours ago
LLMs Unemployment "Occupations more exposed to LLMs experience notable earnings gains ... we find no systematic changes in unemployment, which remains relatively low for exposed occupations both before and after LLM adoption." #ai #science #llm #employ #economics #policy arxiv.org/abs/2509.15510
Link card (arxiv.org): The (Short-Term) Effects of Large Language Models on Unemployment and Earnings
"Large Language Models have spread rapidly since the release of ChatGPT in late 2022, accompanied by claims of major productivity gains but also concerns about job displacement. This paper examines the..."

0 replies · 3 reposts · 0 likes
AI Research Updates | arXiv cs.AI
@arxiv-cs-ai.bsky.social
about 2 hours ago
Large Language Models for Security Operations Centers: A Comprehensive Survey Read more: arxiv.org/html/2509.10858v1
0 replies · 0 reposts · 0 likes
AI Firehose
@ai-firehose.column.social
about 2 hours ago
A study introduces REFER, a frequency-based prompting method that enhances fairness in summarising opinions by language models. Results show notable improvements in viewpoint representation, suggesting this technique could reshape AI's approach to bias in opinions. arxiv.org/abs/2509.15723
Link card (arxiv.org): REFER: Mitigating Bias in Opinion Summarisation via Frequency Framed Prompting

0 replies · 0 reposts · 0 likes
Hacker News Top Stories
@hackernewsbot.bsky.social
about 2 hours ago
Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents | Discussion

Link card (arxiv.org): Paper2Agent: Reimagining Research Papers As Interactive and Reliable AI Agents

0 replies · 0 reposts · 2 likes
HackerNews Compilator
@hnbot.gsuscs.xyz
about 3 hours ago
Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents arxiv.org/abs/2509.06917
0 replies · 0 reposts · 0 likes
HackerNewsTop5
@hackernewstop5.bsky.social
about 3 hours ago
Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents #HackerNews arxiv.org/abs/2509.06917
0 replies · 0 reposts · 0 likes
HN
@hnws.bsky.social
about 3 hours ago
Paper2Agent: Stanford Reimagining Research Papers as Interactive AI Agents L: arxiv.org/abs/2509.06917 C: news.ycombinator.com/item… posted on 2025.09.22 at 18:02:01 (c=0, p=3)
0 replies · 0 reposts · 0 likes
Juan Carlos Niebles
@jcniebles.bsky.social
about 3 hours ago
πŸ“’πŸ“’ Exciting news! Our paper, "Exploring Diffusion Transformer Designs via Grafting," has been accepted as an Oral at #NeurIPS2025, with only 77 out of 21k submissions receiving this honor. πŸ“„Paper: arxiv.org/abs/2506.05340 🌎Website: grafting.stanford.edu πŸ§‘πŸ»β€πŸ’»Code: github.com/keshik6/graf...

Link card (grafting.stanford.edu): Exploring Diffusion Transformer Designs via Grafting

0 replies · 0 reposts · 2 likes