GPU batch job queue with AWS spot instances | glowing telegram - Episode 136
In today's video, I continue refining our Rust/TypeScript web app, "Glowing Telegram," by moving specific parts to a serverless architecture with Pulumi and Python. This episode covers deploying a GPU batch job queue on AWS, a key step in optimizing the application's performance, using spot instances to keep costs down.
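As a rough illustration of the kind of infrastructure set up in the episode, here is a minimal Pulumi (Python) sketch of a managed AWS Batch compute environment backed by GPU spot instances, plus a job queue attached to it. All names, instance types, and the subnet/role ARNs are placeholders, not the stream's actual configuration, and exact argument names can vary between `pulumi_aws` versions:

```python
import pulumi_aws as aws

# Managed compute environment that provisions GPU spot instances.
compute_env = aws.batch.ComputeEnvironment(
    "gpu-spot-env",
    type="MANAGED",
    compute_resources=aws.batch.ComputeEnvironmentComputeResourcesArgs(
        type="SPOT",                       # bid on spare capacity instead of on-demand
        allocation_strategy="SPOT_CAPACITY_OPTIMIZED",
        min_vcpus=0,                       # scale to zero when the queue is empty
        max_vcpus=16,
        instance_types=["g4dn.xlarge"],    # placeholder GPU instance type
        subnets=["subnet-0123456789abcdef0"],            # placeholder
        security_group_ids=["sg-0123456789abcdef0"],     # placeholder
        instance_role="arn:aws:iam::123456789012:instance-profile/example-ecs-instance",  # placeholder
        spot_iam_fleet_role="arn:aws:iam::123456789012:role/example-spot-fleet",          # placeholder
    ),
    service_role="arn:aws:iam::123456789012:role/example-batch-service",  # placeholder
)

# Queue that dispatches submitted jobs onto the compute environment.
job_queue = aws.batch.JobQueue(
    "gpu-job-queue",
    state="ENABLED",
    priority=1,
    compute_environments=[compute_env.arn],
)
```

Setting `min_vcpus=0` lets AWS Batch tear the fleet down entirely between jobs, which is what makes the spot-backed queue cheap for bursty GPU workloads.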
We dive into job definitions and container properties, working out how to manage environment variables and wire up the ECS roles the containers need. Much of the session goes to troubleshooting IAM roles and compute environments, which proves essential to getting the deployment right. By the end of the stream, we've successfully configured a managed compute environment and laid the groundwork for triggering jobs automatically with AWS services.
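The `containerProperties` block of a job definition is where the environment variables, ECS roles, and GPU request discussed above come together. Here is a small, self-contained helper that builds that block in the shape AWS Batch expects; the image name, role ARNs, and resource sizes are illustrative placeholders, not values from the stream:

```python
def gpu_job_container_properties(
    image: str,
    job_role_arn: str,
    execution_role_arn: str,
    env: dict[str, str],
    vcpus: int = 4,
    memory_mib: int = 16384,
    gpus: int = 1,
) -> dict:
    """Build the containerProperties section of an AWS Batch job
    definition that requests GPUs and injects environment variables."""
    return {
        "image": image,
        # Role the running container assumes (application AWS calls).
        "jobRoleArn": job_role_arn,
        # Role ECS uses to pull the image and ship logs.
        "executionRoleArn": execution_role_arn,
        # AWS Batch wants env vars as a list of name/value objects.
        "environment": [{"name": k, "value": v} for k, v in sorted(env.items())],
        # Resource requirements, including the GPU count, as strings.
        "resourceRequirements": [
            {"type": "VCPU", "value": str(vcpus)},
            {"type": "MEMORY", "value": str(memory_mib)},
            {"type": "GPU", "value": str(gpus)},
        ],
    }

# Example usage with placeholder values:
props = gpu_job_container_properties(
    image="ghcr.io/saebyn/example-worker:latest",  # hypothetical image
    job_role_arn="arn:aws:iam::123456789012:role/example-job-role",
    execution_role_arn="arn:aws:iam::123456789012:role/example-exec-role",
    env={"RUST_LOG": "info"},
)
```

The same dictionary can be passed (JSON-encoded) to a Pulumi `aws.batch.JobDefinition` or to the `RegisterJobDefinition` API call.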
This detailed exploration brings us a step closer to a fully serverless backend for Glowing Telegram, one that will integrate with the new UI we're building for it.
🔗 Check out my Twitch channel for more streams: https://www.twitch.tv/saebyn
GitHub: https://github.com/saebyn/glowing-telegram
Discord: https://discord.gg/N7xfy7PyHs