
Joas Pambou

Articles

  • 2 weeks ago | smashingmagazine.com | Joas Pambou

    11 min read | Animation, Python, Tools

    Animation makes things clearer, especially for designers and front-end developers working on UI, prototypes, or interactive visuals. Manim is a tool that lets you create smooth and dynamic animations, not just for the design field but also in math, coding, and beyond, to explain complex ideas or simply make everything a little bit more interactive.

  • Jan 16, 2025 | smashingmagazine.com | Joas Pambou

    11 min read | AI, Tools, LLM, Techniques, Apps

    Shouldn’t there be a way to keep your apps or project data private and improve performance by reducing server latency? This is what on-device AI is designed to solve. It handles AI processing locally, right on your device, without connecting to the internet and sending data to the cloud.

  • Oct 10, 2024 | smashingmagazine.com | Joas Pambou

    14 min read | AI, LLM, Techniques, Tools

    You’ve covered a lot with Joas Pambou so far in this series. In Part 1, you built a system using a vision-language model (VLM) and a text-to-speech (TTS) model to create audio descriptions of images. In Part 2, you improved the system with LLaVA and Whisper, adding conversational analysis so you could ask questions about images and videos.

  • Aug 30, 2024 | smashingmagazine.com | Joas Pambou

    15 min read | AI, Tools, Techniques

    In the second part of this series, Joas Pambou aims to build a more advanced version of the previous application that performs conversational analyses on images or videos, much like a chatbot assistant. This means you can ask questions about your input content and learn more from it. Joas also explores multimodal or any-to-any models that handle images, videos, text, and audio, offering a comprehensive view of cutting-edge AI applications.

  • Jul 24, 2024 | smashingmagazine.com | Joas Pambou

    18 min read | AI, Tools, Techniques

    Joas Pambou built an app that integrates vision language models (VLMs) and text-to-speech (TTS) AI technologies to describe images audibly with speech. This audio description tool can be a big help for people with sight challenges to understand what’s in an image. But how does this even work? Joas explains how these AI systems work and their potential uses, including how he built the app and ways to further improve it.

