
Thinktecture Labs

Technical (mostly work-in-progress) posts from the Thinktecture research labs.
Latest findings, learnings, insights, code snippets. From brain to fingers.

OpenAI TTS – The Missing Piece of the Puzzle

Marco Frodl • 20.11.2023 • Category: AI

Let’s take a closer look at OpenAI’s Text-to-Speech (TTS) API as a key addition to its Generative AI ecosystem. This new API offers several useful features such as different voice options, adjustable output quality, and support for multiple languages. Unique features include customizable speaking speed and emphasis techniques.

Read more →

Improved RAG: More effective Semantic Search with content transformations

Sebastian Gingter • 07.11.2023 • Category: AI

One of the more pragmatic ways to jump on the current AI hype and get some value out of it is to use semantic search.

Semantic search itself is a very simple concept: you have a bunch of documents and you want to find the ones most similar to a given query.

While the technology behind that is quite complex and very mathematical, it is relatively easy to use…
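As a minimal, self-contained sketch of that idea: embed the query and every document as a vector, then rank documents by cosine similarity. The letter-frequency "embedding" below is a toy stand-in; a real system would call an actual embedding model instead.

```python
import math

def embed(text: str) -> list[float]:
    """Toy 'embedding': a letter-frequency vector. A real semantic search
    would call an embedding model here instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "the cat sat on the mat",
    "stock prices fell today",
    "a kitten napped on a rug",
]
query = "the cat sat"

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
```

With a real embedding model, the kitten sentence would also score highly despite sharing few words with the query; that is the point of *semantic* (rather than keyword) search.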

Read more →

Securing Colab Notebooks – Protecting Your OpenAI API Keys

Marco Frodl • 04.09.2023 • Category: AI

Today, I have a very important topic to address – handling security concerns in Google Colab notebooks, specifically when dealing with secret keys like OpenAI API keys.
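One common pattern for this (a sketch, not necessarily the approach the article takes) is to never paste the key into a notebook cell, but to read it from the environment and prompt with a hidden input only as a fallback:

```python
import os
from getpass import getpass

def load_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read a secret from the environment; prompt with hidden input as a
    fallback so the key never appears in notebook cells or saved output."""
    value = os.environ.get(name)
    if not value:
        value = getpass(f"Enter {name}: ")
        os.environ[name] = value  # cache for later cells in the session
    return value
```

This keeps the key out of the notebook file itself, which matters when Colab notebooks are shared or committed to a repository.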

Read more →

OpenAI Function Calling with Azure OpenAI

Thorsten Hans • 01.09.2023 • Category: AI, Azure OpenAI, OpenAI

Support for OpenAI Function Calling was added to Azure OpenAI a couple of weeks ago and can be used for new deployments with the latest gpt-35-turbo and gpt-4 models. In this short article, I’ll guide you through building a simple app that leverages generative AI powered by Azure OpenAI and integrates with third-party API endpoints and local functionality using OpenAI Function Calling.
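The core of function calling is that your app, not the model, executes the code: you describe your functions in a JSON schema, and when the model responds with a `function_call` (a name plus JSON-encoded arguments), you dispatch it to local code. A minimal sketch of that dispatch step, with a hypothetical `get_weather` function and a simulated model response, might look like this:

```python
import json

# Hypothetical local function the model may choose to call.
def get_weather(city: str) -> dict:
    return {"city": city, "temperature_c": 21}

TOOLS = {"get_weather": get_weather}

# Function schema as it would be sent with the chat completion request.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def dispatch(function_call: dict) -> str:
    """Route the model's function_call to local code and return a JSON
    string to send back to the model as a 'function' role message."""
    fn = TOOLS[function_call["name"]]
    args = json.loads(function_call["arguments"])
    return json.dumps(fn(**args))

# Simulated model response choosing the function:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Berlin"}'})
```

The same loop works against Azure OpenAI, since its chat completions endpoint accepts the same function-calling payload as OpenAI's.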

All source code shown in this…

Read more →

Using (Azure) OpenAI Models with Semantic Kernel behind a reverse proxy

Sebastian Gingter • 27.07.2023 • Category: AI, Azure OpenAI, OpenAI, Semantic Kernel

Do you want to use the powerful AI models from OpenAI or Azure OpenAI in your web applications, but don’t want to expose your API keys to the client? In this article, you will learn how to set up a simple proxy using Yarp, a reverse proxy library for .NET, that will add the API keys on the server side and forward the requests to the AI service. You will…
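The article implements this with Yarp in .NET; as a language-neutral sketch of the core idea (all names here hypothetical), the proxy's job is simply to copy the client's request, attach the API key server-side, and rewrite the target URL, so the key never reaches the browser:

```python
def forward(client_request: dict, api_key: str) -> dict:
    """Sketch of the server-side proxy step: clone the incoming request,
    inject the API key, and point it at the upstream AI service."""
    upstream = dict(client_request)
    headers = dict(upstream.get("headers", {}))
    headers["Authorization"] = f"Bearer {api_key}"  # key added server-side only
    upstream["headers"] = headers
    upstream["url"] = "https://api.openai.com" + upstream["path"]
    return upstream

# The browser sends a request without any credentials:
upstream = forward(
    {"path": "/v1/chat/completions", "headers": {"Content-Type": "application/json"}},
    api_key="sk-secret",
)
```

Yarp performs exactly this transformation declaratively via its route and transform configuration, without hand-written forwarding code.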

Read more →

Create Semantic Kernel code & skills to build AI-powered apps with .NET

Sebastian Gingter • 23.07.2023 • Category: AI, Semantic Kernel

Semantic Kernel is a library from Microsoft that can be used to add AI features to your applications. One of the easiest things we can do with Semantic Kernel is to create a chatbot that mimics ChatGPT. The chatbot can be implemented as a console application that uses the .NET infrastructure for configuration, user secrets, and dependency injection (DI). The article provides code examples for how to implement the…

Read more →

Run your GPT-4 securely in Azure using Azure OpenAI Service

Thorsten Hans • 19.07.2023 • Category: AI, Azure OpenAI, OpenAI

Running the latest GPT-4 models in a controlled, secure, and governed environment is mission-critical when we think about real-world scenarios where we want to leverage the capabilities of OpenAI but hide our service instance from the public internet.

This post demonstrates how to deploy the Azure OpenAI service and a GPT-4 model into a private network infrastructure using an Azure Private Endpoint, individual private DNS records, and restrict access to…

Read more →

Efficient AI Workflows: How to Seamlessly Add Support for New Large Language Models using LangChain

Marco Frodl • 08.07.2023 • Category: AI

LangChain, a powerful framework for AI workflows, demonstrates its potential in integrating the Falcon 7B large language model into the privateGPT project. Despite initial compatibility issues, LangChain not only resolves these but also enhances capabilities and expands library support. It allows swift integration of new models with minimal adjustments, reducing development time and providing extensive customization options. The framework’s flexibility proves fundamental to future-proofing AI projects, adapting seamlessly to technological…

Read more →

Wire-debugging Semantic Kernel code talking to OpenAI (and other) LLM APIs through HTTPS

Christian Weyer • 04.07.2023 • Category: AI, Semantic Kernel

Finding issues when using Semantic Kernel to talk to OpenAI or similarly hosted LLMs.

Read more →

ChatDocs – the most sophisticated way to chat via AI with your documents locally

Marco Frodl • 26.06.2023 • Category: AI

ChatDocs is an innovative Local-GPT project that allows interactive chats with personal documents. It features an integrated web server and support for many Large Language Models via the CTransformers library. Although not aimed at commercial speeds, it provides a versatile environment for AI enthusiasts to explore different LLMs privately.

Read more →