Sathish Gangichetty

Featured in: medium.com, databricks.com

Articles

  • Jul 12, 2024 | dsmonk.medium.com | Sathish Gangichetty

    Ok, so far we haven't even introduced DSPy. For the uninitiated, it is a prompt optimization framework that algorithmically optimizes language model prompts and weights, giving us a way to control, guide and systematically improve how we prompt language models. We will not go all the way into optimizing prompts in this blog post (because I don't have any ground-truth data, nor is that within the scope of this post).
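
    For reference, a minimal DSPy sketch in the spirit of that excerpt could look like the lines below; the model name and the "question -> answer" signature are illustrative assumptions rather than details from the post, and the snippet uses the current dspy.LM style of configuration.

        import dspy

        # Assumed model/endpoint; any LM backend DSPy supports would work here.
        lm = dspy.LM("openai/gpt-4o-mini")
        dspy.configure(lm=lm)

        # A signature declares the task; DSPy owns the prompt text behind it,
        # which is what its optimizers can later tune against ground-truth data.
        qa = dspy.ChainOfThought("question -> answer")

        print(qa(question="What does DSPy optimize?").answer)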

  • Apr 9, 2024 | dsmonk.medium.com | Sathish Gangichetty

    In the previous post, we delved into the emerging landscape of AI assistance in retail, highlighting the transformative potential of LLaVa. Building on this momentum, we now embark on a journey to harness the power of cutting-edge AI for a specific, groundbreaking application: creating an end-to-end Visual Q&A System.
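
    For context, the core of such a Visual Q&A system, using the LLaVA integration in Hugging Face transformers, might look roughly like the sketch below; the llava-hf/llava-1.5-7b-hf checkpoint and the product.jpg image path are assumptions, not details from the article.

        import torch
        from PIL import Image
        from transformers import AutoProcessor, LlavaForConditionalGeneration

        model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint
        processor = AutoProcessor.from_pretrained(model_id)
        model = LlavaForConditionalGeneration.from_pretrained(
            model_id, torch_dtype=torch.float16, device_map="auto"
        )

        # Ask a free-form question about a local image.
        image = Image.open("product.jpg")
        prompt = "USER: <image>\nWhat product is shown and what condition is it in?\nASSISTANT:"
        inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=100)
        print(processor.decode(output[0], skip_special_tokens=True))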

  • Jan 16, 2024 | databricks.com | David Wells | Avinash Sooriyarachchi | Sathish Gangichetty | Aemro Amare

    Cyber threats and the tools to combat them have become more sophisticated. SIEM is over 20 years old and has evolved significantly in that time. Initially reliant on pattern-matching and threshold-based rules, SIEMs have advanced their analytic abilities to tackle more sophisticated cyber threats. This evolution, termed the 'Detection Maturity Curve,' illustrates the shift security operations have taken from simple alert systems to advanced mechanisms capable of predictive threat analysis.

  • Sep 1, 2023 | dsmonk.medium.com | Sathish Gangichetty

    What if we start from a quantized model? If you're interested in taking a base Large Language Model (LLM), fine-tuning it using QLoRA, and then quantizing your model for serving, check out Part 1. Instead, if you want to start from a GPTQ-quantized model such as llama-2-7b-gptq and fine-tune it using LoRA, read on. GPTQ is a post-training quantization technique that adopts a mixed int4/fp16 scheme: weights are quantized to int4 while activations remain in float16.
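
    As an illustration of that setup, attaching LoRA adapters to a GPTQ checkpoint with transformers and peft could look like the sketch below; the TheBloke/Llama-2-7B-GPTQ repo id and the LoRA hyperparameters are assumptions, and auto-gptq (or optimum) must be installed for the int4 weights to load.

        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import LoraConfig, get_peft_model

        model_id = "TheBloke/Llama-2-7B-GPTQ"  # assumed GPTQ export of llama-2-7b
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        # LoRA adds small trainable low-rank matrices on top of the frozen int4 weights,
        # so only the adapters (kept in fp16) receive gradients during fine-tuning.
        lora_cfg = LoraConfig(
            r=16, lora_alpha=32, lora_dropout=0.05,
            target_modules=["q_proj", "v_proj"],
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(model, lora_cfg)
        model.print_trainable_parameters()  # adapters only; base weights stay quantized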

  • Aug 30, 2023 | dsmonk.medium.com | Sathish Gangichetty

    Perhaps you can have your cake and eat it too! The last few months have been a whirlwind of innovation in the Open Source LLM landscape, and it's dizzyingly hard to keep track of everything that's new. Among the things that stand out: Llama2 dropped first, then code-llama, and then the variants reported to top GPT-4 on HumanEval.
