The AI Triple Play
Highlighting One Tool, One Event, and One Resource Shaping the AI Landscape
🧠 Resource
AI is moving at a record pace and it’s overwhelming to keep up (I don’t know what your X feed looks like, but mine is just too much). That’s why I appreciate high-signal walkthroughs like this: Andrej does a whirlwind tour of LLMs in one hour. If you speed through this one, he has longer tutorials in the same vein that are still worth the watch, especially if you need that one video that will make you sound like you know what you’re talking about when speaking to friends or colleagues.
If you haven’t heard of Andrej before, he has a pretty good CV: taught the first deep learning course at Stanford, was a founding member of OpenAI’s research team, had a stint at Tesla running their computer vision program, and is now focused on educating the masses about AI at Eureka Labs. Keep up the good work, we need you to explain to the rest of us what the hell is happening.
Most recently, Andrej is known for coining the term “vibe coding”: the practice of building apps almost entirely with AI, just by prompting and talking to LLMs.
The practice is now used by professionals and amateurs alike (yours truly included) and is generally seen as a positive, although there are growing concerns about the AI slop and technical debt that might come out of all these endeavors.
This brings me to the next topic of order 👇
🎫 Event
Pieter Levels, known on X and elsewhere for indie hacking, is leading the vibe coding trend as well. He’s organizing a Vibe Coding Game Jam where anyone can enter with their own game. The only real constraint is that at least 80% of the game’s code has to be written by AI. I’ve entered and hope you do too; it’s a great opportunity to see other creative projects and learn something new (follow #vibejam on X).
> I'm organizing the
> 🌟 2025 Vibe Coding Game Jam
> Deadline to enter: 25 March 2025, so you have 7 days
> - anyone can enter with their game
> - at least 80% code has to be written by AI
> - game has to be accessible on web without any login or signup and free-to-play (preferrably its…
>
> — @levelsio, 3:43 PM • Mar 17, 2025
🧰 Toolbox
I admit, I heard about NotebookLM some time ago but have been sleeping on it (and the rest of Google’s LLM offerings) ever since. Yesterday I stumbled upon it again while looking for a tool that feels like having a conversation with a document. My problem: now that I’m vibe coding a lot, I keep getting introduced to new concepts I don’t understand but would like to. I want to be able to highlight parts of an LLM conversation (or, say, a book), open separate threads on that topic, and easily tie them back to the original conversation.
I haven’t found that in NotebookLM, but boy is it impressive nonetheless! Here are some highlights for me:
- you can add almost any type of resource to its context, and it creates an interactive notebook and chats with you about it
- it can generate a two-host podcast from the sources you feed it (and that’s not all: you can actually INTERACT with the podcast, asking questions that change it on the fly)
- a huge (maybe the largest?) context window, meaning you don’t have to keep reopening chats and re-explaining what it already learned last time
The biggest drawback for me (and I hope it will be easy to add and is coming soon) is that an LLM-style chat with deep search isn’t included. You have to manually add the resources you want to work with, instead of asking it a question and having it find sources you can then feed in. That choice makes sense given the direction of the product (it’s more research focused), but I think there’s a strong case for offering it, similar to how canvas is now available in almost all LLM interfaces.
This video from Tiago Forte helped me get up to speed on the features:
Bonus

Have you tried asking your colleagues for feedback like this?
I love Grok and currently think it has the best interface of all the commercial LLMs. It seems they’re testing in production again, asking for feedback in this interesting way, which I haven’t seen before.
Anything you’d add, change, or remove from this post? Give me your feedback; it will help make WTF Is Happening better.