Generative AI developer toolkit
/in Generative AI /by Paweł Kmiecik
A thrilling adventure in the world of next-gen programming awaits, powered not by replacing humans with AI, but by using AI to enhance human potential. In this blog post, we discuss the most interesting and powerful GenAI tools you should know about.
Operationalizing Large Language Models: How LLMOps can help your LLM-based applications succeed
/in Generative AI /by Mateusz Kwaśniak
In this blog post, we discuss the importance of LLMOps principles and best practices, which will enable you to take your existing or new machine learning projects to the next level.
How to efficiently implement LLMs in your business operations
/in Generative AI /by deepsense.ai
A comprehensive guide to incorporating Large Language Models into your company for increased efficiency and business value.
Data generation with diffusion models – part 2
/in Generative AI /by Natalia Czerep
One of the most challenging tasks in data generation with diffusion models is generating labels intended for semantic segmentation. At deepsense.ai, we have embraced the challenge of devising a novel approach that simultaneously generates images complete with precise segmentation masks. We are sharing the results of our work in this blog post.
Diffusion models in practice. Part 3: Portrait generation analysis
/in Generative AI /by Jarosław Kochanowicz, Dawid Stachowiak, Jan Woś, Maciej Domagała and Dawid Żywczak
In this blog post, we empirically investigate portrait generation using diffusion models. Imitating human evaluation, we objectively measure aspects usually left to subjective manual checks and arbitrary decisions.
How we developed a GPT‑based solution for extracting knowledge from documents
/in Generative AI /by Piotr Gródek
In this blog post, we discuss our latest GPT-based solution addressing the challenge of extracting knowledge from a set of PDF documents.
OpenAI LLM APIs: OpenAI or Microsoft Azure?
/in Generative AI /by Patryk Wyżgowski
In this article, we share our insights on the two main ways of accessing the OpenAI models: directly via OpenAI's own API, and via the Microsoft Azure OpenAI Service.
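The two access paths differ mainly in endpoint shape and authentication: OpenAI's own API takes a Bearer token and a model name per request, while Azure OpenAI Service routes calls to a named deployment on your resource and authenticates with an `api-key` header. As a rough illustration (the resource and deployment names below are hypothetical, not from the article), the two request shapes can be sketched like this:

```python
def build_openai_request(model: str, api_key: str) -> dict:
    """Chat-completions request sent directly to the OpenAI API."""
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {api_key}"},  # Bearer token auth
        "body": {"model": model},  # model is chosen per request
    }


def build_azure_request(resource: str, deployment: str,
                        api_version: str, api_key: str) -> dict:
    """The same call routed through an Azure OpenAI Service deployment."""
    return {
        # The model is fixed by the deployment, not named in the request body
        "url": (f"https://{resource}.openai.azure.com/openai/deployments/"
                f"{deployment}/chat/completions?api-version={api_version}"),
        "headers": {"api-key": api_key},  # Azure uses an api-key header
        "body": {},
    }
```

Beyond the mechanics, the choice usually hinges on factors like regional availability, compliance requirements, and existing Azure commitments; the article above goes into those trade-offs.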
Diffusion models in practice. Part 2: How good is your model?
/in Generative AI /by Jarosław Kochanowicz, Maciej Domagała, Dawid Stachowiak and Dawid Żywczak
This is the second post in our series "Diffusion models in practice". In this article, we start our journey into the practical aspects of diffusion modeling, which we found even more exciting. First, we would like to address a fundamental question that arises when one begins to venture into the realm of generative models: where to start?
How to train a Large Language Model using limited hardware?
/in Generative AI /by Alicja Kotyla
Large language models (LLMs) are yielding remarkable results on many NLP tasks, but training them is challenging due to their heavy GPU memory demands and long training times. To address these challenges, various parallelism paradigms have been developed, along with memory-saving techniques that enable the effective training of LLMs. In this article, we describe these methods.
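To see why memory is the bottleneck, a common back-of-envelope accounting helps: mixed-precision training with Adam keeps fp16 weights and gradients plus fp32 optimizer states, roughly 16 bytes per parameter (the accounting popularized by the ZeRO work), before even counting activations. A minimal sketch of that arithmetic, with illustrative numbers not taken from the article:

```python
def training_memory_gb(n_params: float, bytes_per_param: int = 16) -> float:
    """Rough model-state memory for mixed-precision Adam training.

    16 bytes/param = 2 (fp16 weights) + 2 (fp16 gradients)
                   + 4 (fp32 master weights) + 4 (Adam momentum)
                   + 4 (Adam variance).  Activations are excluded.
    """
    return n_params * bytes_per_param / 1024**3


# Model states alone for a 7B-parameter model far exceed a single 24 GB GPU,
# which is why parallelism and memory-saving techniques are needed.
print(f"{training_memory_gb(7e9):.0f} GB")
```

Techniques such as sharding optimizer states across devices, gradient checkpointing, and parameter-efficient fine-tuning all attack different terms of this budget; the article surveys them in detail.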