VJAI Paper Hub
Seed Pool · Nominations Open

Cycle Seeds

Each cycle has its own seed pool: papers nominated by the community for that sprint. Vote for your favourites or nominate a new one.

Active Cycles: 1 · Nominations: 0 · Total Votes Cast: 0 · Cycles Planned: 6
Propose a Seed Paper
Active: Nominations Open

May 2026

🎯 ICLR 2026 Highlights · 0 nominations · Deep Dive: TBD
🌱 Organizer Suggestions (not yet nominated)
LLM · RL · Reasoning
🌱 Seed

DeepSeek-R1: Incentivizing Reasoning via Reinforcement Learning

Suggested by VJAI Core Team

Shows that LLMs can learn to reason via pure RL, with the R1-Zero variant skipping the SFT cold start entirely, reaching o1-level performance.

World Models · Self-Supervised · Vision
🌱 Seed

JEPA: Self-Supervised Learning via Joint-Embedding Predictive Architecture

Suggested by VJAI Core Team

LeCun's proposed architecture for world models, which predicts in an abstract representation space rather than in pixel space.

Systems · Efficiency · Hardware
🌱 Seed

FlashAttention-3: Fast Attention for H100 GPUs

Suggested by VJAI Core Team

Leverages H100 hardware features (TMA, WGMMA) to push attention throughput to near-theoretical limits.

Scaling · LLM · Foundations
🌱 Seed

Scaling Laws for Neural Language Models

Suggested by VJAI Core Team

Empirical laws relating model performance to compute, data, and parameters: the blueprint for GPT-4.
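To illustrate the form these laws take, here is a minimal sketch of the parameter-count law L(N) = (N_c / N)^α_N. The constants below are approximately the values Kaplan et al. report, but treat the snippet as illustrative rather than a reproduction of the paper's fits:

```python
def scaling_loss(n_params: float,
                 n_c: float = 8.8e13,    # N_c, approx. constant from the paper
                 alpha_n: float = 0.076  # alpha_N, approx. fitted exponent
                 ) -> float:
    """Power-law test loss vs. non-embedding parameter count:
    L(N) = (N_c / N) ** alpha_N."""
    return (n_c / n_params) ** alpha_n

# Bigger models predict lower loss, with sharply diminishing returns:
for n in (1e8, 1e9, 1e10):
    print(f"N = {n:.0e}: L = {scaling_loss(n):.3f}")
```

The same power-law shape, with different constants, governs the compute and data axes; the paper's point is that all three follow smooth trends you can extrapolate.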

Robotics · Multimodal · VLA
🌱 Seed

RT-2: Vision-Language-Action Models

Suggested by VJAI Core Team

Co-fine-tunes a VLM on robot data so the same model reasons about scenes and outputs robot actions.

RL · World Models · Planning
🌱 Seed

DreamerV3: Mastering Diverse Domains with World Models

Suggested by VJAI Core Team

A single algorithm that masters Atari, continuous control, and Minecraft from scratch using a learned world model.

Vision · Segmentation · Foundation Models
🌱 Seed

Segment Anything Model 2 (SAM 2)

Suggested by VJAI Core Team

Extends SAM to video, enabling promptable, real-time segmentation of any object in any video.

Alignment · RLHF · Safety
🌱 Seed

Constitutional AI: Harmlessness from AI Feedback

Suggested by VJAI Core Team

Trains a harmless AI assistant with AI-generated feedback rather than human labels, guided by a "constitution" of written principles.

Agents · LLM · Embodied AI
🌱 Seed

Voyager: An Open-Ended Embodied Agent with LLMs

Suggested by VJAI Core Team

Lifelong learning agent in Minecraft that writes its own code to solve tasks, building a skill library over time.

Agents · LLM · Tool Use
🌱 Seed

Toolformer: Language Models That Can Use Tools

Suggested by VJAI Core Team

Self-supervised method to teach LLMs when and how to call external APIs (calculator, search, calendar).
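The mechanism is easy to picture: the model emits inline calls such as [Calculator(400 / 1400)] in its output, and a postprocessor executes them and splices the result back into the text. A toy sketch of that splicing step (the tool registry and regex below are illustrative, not from the paper):

```python
import re

# Toy Toolformer-style postprocessor: find inline calls like
# "[Calculator(400 / 1400)]", run the named tool, and rewrite the call
# as "[Calculator(400 / 1400) -> 0.29]".
TOOLS = {
    "Calculator": lambda expr: str(round(eval(expr), 2)),  # demo only; eval is unsafe
}

CALL_RE = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def execute_calls(text: str) -> str:
    def run(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        result = TOOLS[name](arg)
        return f"[{name}({arg}) -> {result}]"
    return CALL_RE.sub(run, text)

print(execute_calls("The ratio is [Calculator(400 / 1400)] of the total."))
```

The paper's contribution is the self-supervised part: deciding where such calls help, by keeping only the calls that reduce the language-modeling loss on the following tokens.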

Interpretability · Safety · Theory
🌱 Seed

Mechanistic Interpretability of Neural Networks

Suggested by VJAI Core Team

Reverse-engineering neural networks to understand circuits, features, and internal representations.

RLHF · Alignment · LLM
🌱 Seed

RLHF: Training Language Models to Follow Instructions with Human Feedback

Suggested by VJAI Core Team

InstructGPT: the paper that made GPT-3 follow instructions by fine-tuning with PPO on human preferences.

LLM · Open Source · Pretraining
🌱 Seed

The Llama 3 Herd of Models

Suggested by VJAI Core Team

Meta's open-weight Llama 3 family, from 8B to 405B parameters, with details on pretraining data, architecture, and alignment.

LLM · Multimodal · Long Context
🌱 Seed

Gemini 1.5: Unlocking Multimodal Understanding Across Millions of Tokens

Suggested by VJAI Core Team

Google's Gemini 1.5 with a 1M-token context window, and how mixture-of-experts enables efficient long-context processing.

MoE · Scaling · Efficiency
🌱 Seed

Mixture of Experts: Switch Transformer

Suggested by VJAI Core Team

Scales language models to a trillion parameters with sparse mixture-of-experts routing, replacing dense FFN layers.

Diffusion · Image Generation · Generative Models
🌱 Seed

Stable Diffusion: High-Resolution Image Synthesis with Latent Diffusion

Suggested by VJAI Core Team

Moves diffusion process into a compressed latent space, enabling high-quality image synthesis on consumer hardware.

Multimodal · Vision · Contrastive Learning
🌱 Seed

CLIP: Learning Transferable Visual Models From Natural Language Supervision

Suggested by Team Hanoi

Learns image-text representations from 400M pairs; its zero-shot image classification rivals supervised baselines.
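The training signal behind this can be sketched compactly: within a batch, matched image/text embedding pairs should out-score all mismatched pairs under a symmetric cross-entropy. A minimal NumPy sketch with random embeddings (the batch shape and 0.07 temperature are illustrative, not the paper's training config):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

img = l2_normalize(rng.normal(size=(batch, dim)))  # image embeddings
txt = l2_normalize(rng.normal(size=(batch, dim)))  # text embeddings

logits = img @ txt.T / 0.07   # cosine similarities scaled by temperature
labels = np.arange(batch)     # pair i matches pair i (the diagonal)

def cross_entropy(logits, labels):
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

# Symmetric loss: classify text given image, and image given text.
loss = 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
print(f"contrastive loss: {loss:.3f}")
```

At scale, the same diagonal-vs-off-diagonal objective is what makes zero-shot classification work: class names become text prompts, and the best-scoring prompt wins.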

GNN · Graph · Survey
🌱 Seed

Graph Neural Networks: A Review of Methods and Applications

Suggested by Team Saigon

A comprehensive survey of GNN methods: message passing, pooling, and applications in chemistry, social networks, and NLP.

Shape What We Read Next

Every nomination and vote directly influences the papers we deep-dive into. Join the community and make your voice count.