Bluesky Feeds / David Colarusso / AI Papers et al.

A non-exhaustive feed showing posts that look like they link to an AI/ML paper.

Feed on Bluesky

Feed Stats

  • 💙 Liked by 7 users
  • 📅 Updated 7 months ago
  • ⚙️ Provider skyfeed.me

AI Papers et al. Likes over time

Like count prediction
The feed AI Papers et al. has not gained any likes in the last month.

Feed Preview for AI Papers et al.

Rafe Meager (they/them)
@economeager.bsky.social
about 15 hours ago
Today let us read Baydin et al. (2018), Automatic Differentiation in Machine Learning: A Survey arxiv.org/pdf/1502.05767
I have screenshotted the first three paragraphs of the introduction. They read:

Methods for the computation of derivatives in computer programs can be classified into four categories: (1) manually working out derivatives and coding them; (2) numerical differentiation using finite difference approximations; (3) symbolic differentiation using expression manipulation in computer algebra systems such as Mathematica, Maxima, and Maple; and (4) automatic differentiation, also called algorithmic differentiation, which is the subject matter of this paper.

Conventionally, many methods in machine learning have required the evaluation of derivatives and most of the traditional learning algorithms have relied on the computation of gradients and Hessians of an objective function (Sra et al., 2011). When introducing new models, machine learning researchers have spent considerable effort on the manual derivation of analytical derivatives to subsequently plug these into standard optimization procedures such as L-BFGS (Zhu et al., 1997) or stochastic gradient descent (Bottou, 1998). Manual differentiation is time consuming and prone to error. Of the other alternatives, numerical differentiation is simple to implement but can be highly inaccurate due to round-off and truncation errors (Jerrell, 1997); more importantly, it scales poorly for gradients, rendering it inappropriate for machine learning where gradients with respect to millions of parameters are commonly needed. Symbolic differentiation addresses the weaknesses of both the manual and numerical methods, but often results in complex and cryptic expressions plagued with the problem of "expression swell" (Corliss, 1988). Furthermore, manual and symbolic methods require models to be defined as closed-form expressions, ruling out or severely limiting algorithmic control flow and expressivity.

We are concerned with the powerful fourth technique, automatic differentiation (AD). AD performs a non-standard interpretation of a given c…
💬 1 · 🔁 4 · ❤️ 18
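The four categories the excerpt contrasts can be made concrete in a few lines of Python: a toy forward-mode AD built on dual numbers (category 4) recovers the derivative exactly, while a finite-difference approximation (category 2) carries truncation error. This is an illustrative sketch, not code from the survey; the function f and step size h are arbitrary choices.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: .val is f(x), .dot is f'(x)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule propagates the derivative alongside the value
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x    # f'(x) = 3x^2 + 2, so f'(3) = 29

# (4) forward-mode AD: seed dx/dx = 1 and read off the derivative exactly
ad = f(Dual(3.0, 1.0)).dot      # 29.0, exact to machine precision

# (2) numerical differentiation: simple but subject to truncation error
h = 1e-6
fd = (f(3.0 + h) - f(3.0)) / h  # close to 29, but not exact
```

Because f is a plain Python function, the same code runs on floats (for finite differences) and on Dual values (for AD), which is the "non-standard interpretation" the excerpt alludes to.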
Stephen Turner
@stephenturner.us
about 8 hours ago
What Does 'Human-Centred AI' Mean? arxiv.org/abs/2507.19960 interesting read from @olivia.science
💬 1 · 🔁 3 · ❤️ 7
AI Firehose
@ai-firehose.column.social
about 3 hours ago
Researchers unveil SuperCoder, a groundbreaking LLM that outperforms industry-standard compilers in assembly program optimization, achieving a 95% correctness rate and a 1.46× speedup. This approach could redefine program optimization beyond traditional techniques. arxiv.org/abs/2505.11480
SuperCoder: Assembly Program Superoptimization with Large Language Models (arxiv.org)

💬 0 · 🔁 0 · ❤️ 2
Shayoni Lynn
@shayonislynn.bsky.social
about 3 hours ago
#AI powered #Misinformation campaigns have never been easier to develop and execute. How do small language models produce political messaging, and can these be automatically evaluated without human raters? This study finds two key behavioural insights.
AI Propaganda factories with language models (arxiv.org)
AI-powered influence operations can now be executed end-to-end on commodity hardware. We show that small language models produce coherent, persona-driven political messaging and can be evaluated autom...

💬 0 · 🔁 0 · ❤️ 1
Cameron D. Campbell 康文林
@camerondcampbell.blog
about 23 hours ago
Our student Yue (Bruce) Yu has been working on an improved pipeline for nominative record linkage in Chinese historical datasets that uses machine learning. This working paper describes the approach and compares it with our existing, somewhat ad hoc approach for the 缙绅录. osf.io/preprints/so...

OSF (osf.io)

💬 0 · 🔁 1 · ❤️ 6
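The post does not detail the pipeline, but the task it names — nominative record linkage — can be illustrated with a toy string-similarity linker of the simple, rule-based kind such ML pipelines improve on. The rosters, name variants, and threshold below are invented for illustration only.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Character-level similarity in [0, 1] between two name strings."""
    return SequenceMatcher(None, a, b).ratio()

# Two invented rosters of Chinese names, with one spelling variant.
roster_a = ["王文林", "李自成", "張廷玉"]
roster_b = ["王文材", "張廷玉", "劉宗敏"]

# Greedy linkage: accept each name's best candidate above a fixed threshold.
THRESHOLD = 0.6
links = []
for name in roster_a:
    best = max(roster_b, key=lambda cand: similarity(name, cand))
    if similarity(name, best) >= THRESHOLD:
        links.append((name, best))
# "王文林" links to the variant "王文材"; "李自成" finds no match above threshold.
```

A learned linker replaces the hand-set threshold and raw character similarity with features (office, year, place) and a trained classifier, which is the gap the working paper addresses.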
AI Firehose
@ai-firehose.column.social
about 4 hours ago
A study shows how AI can transform sustainable food systems by connecting molecular composition to taste and nutrition, aiming to accelerate food design innovation and fulfill the demand for healthier, eco-friendly options. arxiv.org/abs/2509.21556
AI for Sustainable Future Foods (arxiv.org)

💬 0 · 🔁 0 · ❤️ 1
Luciano Floridi
@floridi.bsky.social
1 day ago
A short, invited article (many thanks to the Spanish Robotic Association - AER) to be published in INSIGHT'25, in connection with the 2025 Barcelona Declaration: "A Humanistic and Ethical Approach to Robotics, Automation, and Digitalisation" papers.ssrn.com/sol3...
💬 0 · 🔁 2 · ❤️ 5
Yonghan Jung
@yonghanjung.bsky.social
about 7 hours ago
Thrilled to share our new paper! 📄 Paper: arxiv.org/abs/2509.22531 💻 Code: github.com/yonghanjung/... We develop the first orthogonal ML estimators for heterogeneous treatment effects (HTE) under front-door adjustment, enabling HTE identification even with unmeasured confounders.
Debiased Front-Door Learners for Heterogeneous Effects (arxiv.org)
In observational settings where treatment and outcome share unmeasured confounders but an observed mediator remains unconfounded, the front-door (FD) adjustment identifies causal effects through the m...

💬 0 · 🔁 0 · ❤️ 1
arXiv cond-mat.dis-nn Disordered Systems and Neural Networks
@condmatdisnn-bot.bsky.social
about 17 hours ago
Arsham Ghavasieh, Meritxell Vila-Minana, Akanksha Khurd, John Beggs, Gerardo Ortiz, Santo Fortunato: Toward a Physics of Deep Learning and Brains arxiv.org/abs/2509.22649 arxiv.org/pdf/2509.22649 arxiv.org/html/2509.22649
💬 0 · 🔁 6 · ❤️ 2
Philipp Leitner
@philippleitner.net
1 day ago
New paper accepted by Huaifeng Zhang, Mohannad Alhanahnah, YT, and Ahmed Ali El Din: BLAFS: A Bloat-Aware Container File System (accepted at the ACM Symposium on Cloud Computing) Preprint: arxiv.org/abs/2305.04641 Tool: github.com/negativa-ai/... Congratulations to Huaifeng and the team!
The Cure is in the Cause: A Filesystem for Container Debloating (arxiv.org)
Containers have become a standard for deploying applications due to their convenience, but they often suffer from significant software bloat: unused files that inflate image sizes, increase provisionin...

💬 0 · 🔁 0 · ❤️ 3
Stephen T. Cobb
@scobb.net
1 day ago
If you read just one paper about AI and existential risk this weekend, let it be this pioneering analysis from @atoosakz.bsky.social. Comes closer to addressing the inherent vulnerability of AI technology than anything I've found so far. Brilliant work! #AI #AIRisk #TechRisk #AIEthics #XRisk
Two Types of AI Existential Risk: Decisive and Accumulative (arxiv.org)
The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level i...

💬 1 · 🔁 2 · ❤️ 3
Shubhendu Trivedi
@shubhendu.bsky.social
2 days ago
I was such a huge fan of this line of work led by David Balduzzi arxiv.org/abs/2001.04678 A lot of it now again relevant for even problems like orchestrating several LLMs to serve in a "marketplace," orchestrating agents, and so on.
Smooth markets: A basic mechanism for organizing gradient-based learners (arxiv.org)
With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show the...

💬 1 · 🔁 0 · ❤️ 4
arxiv cs.CL
@arxiv-cs-cl.bsky.social
about 15 hours ago
Yudong Li, Yufei Sun, Yuhan Yao, Peiru Yang, Wanyue Li, Jiajun Zou, Yongfeng Huang, Linlin Shen RedNote-Vibe: A Dataset for Capturing Temporal Dynamics of AI-Generated Text in Social Media arxiv.org/abs/2509.22055
💬 0 · 🔁 0 · ❤️ 1
AI Firehose
@ai-firehose.column.social
about 16 hours ago
A study reveals a method to prevent model collapse in overparameterized linear regression by optimizing real to synthetic data ratios. The optimal proportion aligns with the golden ratio’s reciprocal, stabilizing predictions in generative AI models. arxiv.org/abs/2509.22341
Preventing Model Collapse Under Overparametrization: Optimal Mixing Ratios for Interpolation Learning and Ridge Regression (arxiv.org)

💬 0 · 🔁 0 · ❤️ 1
AI Firehose
@ai-firehose.column.social
about 16 hours ago
This study reveals a smoothed vector quantization method that improves codebook utilization and prevents collapse, enhancing performance in autoencoding and contrastive learning. This method exceeds existing techniques, tackling a core issue in machine learning. arxiv.org/abs/2509.22161
Pushing Toward the Simplex Vertices: A Simple Remedy for Code Collapse in Smoothed Vector Quantization (arxiv.org)

💬 0 · 🔁 0 · ❤️ 1
François Guité
@francoisguite.bsky.social
1 day ago
Researchers uncover hidden ingredients behind AI creativity | LiveScience www.livescience.com/techn…. An analytic theory of creativity in convolutional diffusion models | arXiv arxiv.org/abs/2412.20292 #openaccess
💬 0 · 🔁 0 · ❤️ 2
AI Firehose
@ai-firehose.column.social
about 16 hours ago
The game-theoretic framework for federated learning uncovers complexities of varied data contributions and addresses incentive misalignment. A strategyproof payment mechanism resolves the free-rider dilemma, reshaping collaborative AI training. arxiv.org/abs/2509.21612
Incentives in Federated Learning with Heterogeneous Agents (arxiv.org)

💬 0 · 🔁 0 · ❤️ 1
AI Firehose
@ai-firehose.column.social
about 17 hours ago
Research presents a game-theoretic approach to federated learning that tackles incentive misalignments among agents, showing cooperation boosts efficiency and reduces free-riding. A contribution payment mechanism supports equitable collaboration in AI. arxiv.org/abs/2509.21612
Incentives in Federated Learning with Heterogeneous Agents (arxiv.org)

💬 0 · 🔁 0 · ❤️ 1
arXiv cs.RO Robotics
@csro-bot.bsky.social
about 17 hours ago
Alberto Olivares-Alarcos, Sergi Foix, Júlia Borràs, Gerard Canal, Guillem Alenyà: Ontological foundations for contrastive explanatory narration of robot plans arxiv.org/abs/2509.22493 arxiv.org/pdf/2509.22493 arxiv.org/html/2509.22493
💬 0 · 🔁 3 · ❤️ 1
arXiv cs.LG Machine Learning
@cslg-bot.bsky.social
about 17 hours ago
Amine Bechar, Adel Oulefki, Abbes Amira, Fatih Kurogollu, Yassine Himeur: Extracting Actionable Insights from Building Energy Data using Vision LLMs on Wavelet and 3D Recurrence Representations arxiv.org/abs/2509.21934 arxiv.org/pdf/2509.21934 arxiv.org/html/2509.21934
💬 0 · 🔁 1 · ❤️ 1
arXiv cs.CL Computation and Language
@cscl-bot.bsky.social
about 17 hours ago
Tanise Ceron, Dmitry Nikolaev, Dominik Stammbach, Debora Nozza: What Is The Political Content in LLMs' Pre- and Post-Training Data? arxiv.org/abs/2509.22367 arxiv.org/pdf/2509.22367 arxiv.org/html/2509.22367
💬 0 · 🔁 3 · ❤️ 1