Diffusion models in practice. Part 3: Portrait generation analysis
/in Generative AI /by Jarosław Kochanowicz, Dawid Stachowiak, Jan Woś, Maciej Domagała and Dawid Żywczak
In this blog post, we empirically investigated portrait generation using diffusion models. Imitating human evaluation, we objectively measured aspects usually left to subjective manual checks and arbitrary decisions.
How we developed a GPT‑based solution for extracting knowledge from documents
/in Generative AI /by Piotr Gródek
In this blog post, we discuss our latest GPT-based solution addressing the challenge of extracting knowledge from a set of PDF documents.
OpenAI LLM APIs: OpenAI or Microsoft Azure?
/in Generative AI /by Patryk Wyżgowski
In this article, we share our insights into the two main ways of accessing the OpenAI models: directly through OpenAI's API, and via the Microsoft Azure OpenAI Service.
Diffusion models in practice. Part 2: How good is your model?
/in Generative AI /by Jarosław Kochanowicz, Maciej Domagała, Dawid Stachowiak and Dawid Żywczak
This is the second post in our series “Diffusion models in practice”. In this article, we start our journey into the practical aspects of diffusion modeling, which we found even more exciting. First, we address a fundamental question that arises when one begins to venture into the realm of generative models: where to start?
How to train a Large Language Model using limited hardware?
/in Generative AI /by Alicja Kotyla
Large language models (LLMs) are yielding remarkable results on many NLP tasks, but training them is challenging due to high GPU memory demands and long training times. To address these challenges, various parallelism paradigms have been developed, along with memory-saving techniques that enable the effective training of LLMs. In this article, we describe these methods.
Data generation with diffusion models – part 1
/in Generative AI /by Natalia Czerep
It is widely known that computer vision models require large amounts of data to perform well. Unfortunately, in many business cases we are left with only a small amount of data. There are several approaches to overcoming the issue of insufficient data, one of which is supplementing the available dataset with new images, which is discussed in this article.
Diffusion models in practice. Part 1: A primer
/in Generative AI /by Jarosław Kochanowicz, Maciej Domagała, Dawid Stachowiak and Krzysztof Dziedzic
The AI revolution continues, and there is no indication of it nearing the finish line. The last year has brought astonishing developments in two critical areas of generative modeling: large language models and diffusion models.
Report: The diverse landscape of large language models. From the original Transformer to GPT-4 and beyond
/in Generative AI /by Artur Zygadlo
This report is an attempt to explain and summarize the diverse landscape of LLMs in early 2023.
ChatGPT – what is the buzz all about?
/in Generative AI /by Eryk Mazuś and Maciej Domagała
Over the last few months, ChatGPT has generated a great deal of excitement. Some have gone as far as to suggest it is a giant step in developing AI that will overtake humanity in many important areas, both in business and social life. Others view it more as a distraction on the path towards achieving human-level intelligence. How did ChatGPT generate such hype? In this article, we’ll try to explain.