Dalbe AI Workshop February 2026: From ChatGPT to Google AI Studio in Just a Few Hours

Alpar Torok

It wasn't a workshop. It was a crash course.

After the big workshop we organized with the Visit Mureș team, we decided that at Dalbe we would hold two monthly workshops: one in Romanian and one in Hungarian. The first Romanian workshop took place on February 12, and the Hungarian one on February 19, which we will write about separately.

Four people confirmed for February 12. Two showed up. I don't know if the weather or the time of year was to blame, but the two who did show up made the evening completely worth it.

Ovidiu Baciu, site supervisor and construction project manager, and Claudiu Tăran, owner of JoySpa.ro - Wooden Hot Tubs, Jacuzzis & Premium Wellness. Both are members of the BNI Action chapter in Târgu Mureș. They came because they had heard feedback from their BNI colleagues that the Dalbe workshops are a treasure trove of AI-related information. That means more than any paid ad.

In the end, after 10:10 PM, Ovidiu and Claudiu told me their heads were spinning. That it wasn't a workshop, it was a crash course. And that they liked this format much more, which is exactly why they didn't even open their laptops; they just let me show them more and more things, practically for hours. At the next BNI meeting, they gave a testimonial about our evening: a crash course that is totally worth it.

Where we started: AI didn't start with ChatGPT

The first mistake many people make when they hear about AI is to equate it with ChatGPT. We wanted to clarify this from the very beginning, because if you don't understand the context, the rest of the conversation has no foundation.

The theory of artificial intelligence has been around for decades. Machine learning, neural networks, expert systems—all of these appeared long before 2022. What has changed dramatically in recent years is the emergence of LLMs (Large Language Models), and the real inflection point was the commercial success of OpenAI's ChatGPT.

We discussed the origins of the research teams behind these models, starting with the Transformer architecture published by Google researchers in 2017. From there, we got to an important fact: a group of former OpenAI researchers left and founded Anthropic in 2021, the company behind Claude. This competition among the big players has accelerated the entire AI ecosystem at a pace few had anticipated.

What models exist and what is the difference between them

From context, we moved to practice: what models are available now, how they differ, and what they are suited for.

We talked about OpenAI's ChatGPT with the 4o model, which has been extremely popular and has proven itself for MyGPTs and automations, as well as the newest available models, version 5.2 Fast and Thinking. We also discussed Google Gemini, which has made a strong comeback with model 3, including the Nano Banana and Nano Banana Pro image generation variants. At Dalbe, we use it daily as Google Workspace users, and my colleague Krisztina is a huge Gemini fan. After Google's Gemini and Microsoft-backed OpenAI's ChatGPT, we naturally got to Amazon-backed Anthropic's Claude, currently at versions 4.5 and 4.6, which is our development team's preferred tool.

One thing I made sure to emphasize: there is no universal "best" model. There are models better suited for specific tasks. Claude writes and codes exceptionally well. ChatGPT has the most mature ecosystem of integrations and MyGPTs. Gemini integrates natively into Google Workspace. The choice depends on what you want to do, not what is most popular. DeepSeek, Manus, Meta's Llama, and xAI's Grok are also notable names that shouldn't be forgotten.

Tokens, credits, and why this matters to you as an entrepreneur

We opened the ChatGPT interface and went through every UI element together. One of the concepts people ignore most often, but which has a direct impact on what you can do with an AI model, is tokens.

Almost all AI models use a token system to measure consumption, at both input and output. If you want to send a large document for analysis, you need a large input context window. If you want a long article or a detailed plan, you need a large output limit. Free plans, paid plans, and higher tiers are largely differentiated by these token limits. It's worth asking an LLM itself about tokens and how they are used; a good starting prompt is, "As a visual tutor, explain AI tokens to me like I'm a 12th grader."

Almost all LLMs work with credit or token systems. You have a limit on how much you can send in a conversation and a limit on how much you can receive. Free vs. paid, packages, caps. It's not just a technical detail. It's cost and capacity.

Understanding tokens means understanding how much it costs you, how far you can go, and how to plan your work.

Concretely: if you send a 50-page contract to a model with a small context window, the model will cut off the information or hallucinate. Not because it's stupid, but because it doesn't fit. It's important to know this before relying on a model for large documents.
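To make the idea tangible, here is a minimal sketch of how you might estimate whether a document fits a context window. The 4-characters-per-token figure is only a common rule of thumb for English text, and the window sizes are illustrative; real tokenizers vary by model.

```python
# Rough token estimator. Real tokenizers (e.g. OpenAI's tiktoken) differ
# per model; ~4 characters per English token is a common rule of thumb.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate how many tokens a model would count for `text`."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(text: str, context_window: int) -> bool:
    """Check whether a document roughly fits a given context window."""
    return estimate_tokens(text) <= context_window

# A 50-page contract at roughly 3,000 characters per page:
contract = "x" * (50 * 3000)
print(estimate_tokens(contract))            # roughly 37,500 tokens
print(fits_in_context(contract, 8_000))     # False: too big for a small window
print(fits_in_context(contract, 128_000))   # True: fits a large window
```

The point is not precision; it's knowing in advance whether you're about to ask a model to "read" more than it physically can.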

The anatomy of a good prompt and the AHA moment that shifted perspective

From here, we dove into prompt engineering.

This was one of the most impactful moments of the evening. I explained the structure of an effective prompt, drawing from both the training materials OpenAI published with the launch of its new models and the two free courses launched by Google: AI Essentials and Prompting Essentials.

The essential elements of a good prompt are the role given to the model, the context of the situation, the specific task, the tone and target audience, and the format in which you want the answer. I drew a parallel with the SMART acronym from management: just as a SMART goal is specific, measurable, achievable, relevant, and time-bound, a good prompt follows a similar logic. Frameworks like CO-STAR have been popular, but with the newer models, the amount of relevant context you provide matters more than sticking to a fixed template.

But the moment that truly changed their perspective was something else. I gave the same example twice: once without a defined role for the model, and once with a specific and clear role. The difference in the quality of the answer was immediate and obvious. Claudiu said on the spot that he wanted a separate meeting dedicated exclusively to this topic for JoySpa. That means the information landed exactly where it needed to.

GDPR, personal data, and the risk many ignore

We opened the Settings section in ChatGPT and went through the memory and data protection options. The main message I wanted to convey: be careful what data you put into any AI system.

The EU-US Privacy Shield was invalidated in 2020 and no longer exists in its original form. Data entered into AI platforms can be used to train models unless you explicitly opt out. Entering confidential client data or internal company data into an AI chat is a real business risk, not a theoretical concern.

I gave the example of Facebook apps from a few years ago that asked for access to personal data under the guise of fun games, and how much personal information ended up in opaque systems as a result. The logic is the same.

One-shot vs. multi-shot and why iteration beats the perfect prompt

A single well-written prompt is useful. A conversation with follow-ups, clarifications, and additional examples produces significantly better results. The more context you provide throughout the conversation, the better the model calibrates to what you need.

I gave concrete examples from their real-life situations: for Ovidiu, creating a construction project plan in the role of a project manager, with phases, responsibilities, and deadlines. For Claudiu, a marketing strategy for JoySpa in a structured table, with channels, objectives, and KPIs. Neither of them had seen before how quickly a serious working structure can be generated starting from a few well-written sentences.

JSON prompting, what it is, and why we got to it

I introduced the concept of JSON prompting, but not before explaining what JSON is and what XML was before it, to provide some historical context. The practical concept: if you want structured and repeatable output, you ask the model to respond in JSON format. This is useful for image templates, articles with a fixed structure, or data that needs to go directly into another system or workflow.

JSON prompt example

Below is a simple prompt example structured in JSON. The idea isn't to "program," but to force the AI to provide an organized answer, easy to reuse in strategy, documentation, or automation.

{
  "role": "Marketing strategist for a premium wellness brand in Romania",
  "context": {
    "business_name": "JoySpa",
    "target_audience": "Entrepreneurs and busy professionals, 30-55 years old, interested in relaxation and premium experiences",
    "location": "Târgu Mureș",
    "market_position": "Premium services, personalized experience, intimate atmosphere"
  },
  "task": "Create a content plan for the next 30 days",
  "objectives": [
    "Increase local brand awareness",
    "Generate online bookings",
    "Strengthen premium positioning"
  ],
  "constraints": {
    "tone": "Professional, warm, without exaggerations",
    "format": "Structured in a table",
    "include_columns": [
      "Week",
      "Content type",
      "Topic",
      "Main message",
      "Call to action"
    ]
  },
  "output_requirements": {
    "language": "English",
    "clarity": "No technical jargon",
    "max_length": "Maximum 800 words"
  }
}

This type of prompt reduces ambiguity and increases the chances that the result will be directly usable, without 3-4 rounds of corrections.
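If you use the same JSON structure repeatedly, one practical step further is to assemble it in code and serialize it, so only the business details change between runs. A minimal sketch; the field names simply mirror the example above and are our own convention, not anything the model requires.

```python
import json

def build_content_prompt(business_name: str, audience: str, location: str) -> str:
    """Assemble a reusable JSON prompt string. Only the context fields
    change between clients; the rest of the template stays fixed."""
    prompt = {
        "role": "Marketing strategist for a premium wellness brand",
        "context": {
            "business_name": business_name,
            "target_audience": audience,
            "location": location,
        },
        "task": "Create a content plan for the next 30 days",
        "constraints": {"format": "Structured in a table"},
    }
    # ensure_ascii=False keeps diacritics (e.g. Târgu Mureș) readable
    return json.dumps(prompt, indent=2, ensure_ascii=False)

print(build_content_prompt("JoySpa", "Busy professionals, 30-55", "Târgu Mureș"))
```

The output string can be pasted into any chat interface, or sent through an API or automation, which is exactly where structured prompts pay off.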

I showed how a JSON prompt can create a consistent image template, something we use in our content creation process at Dalbe.

JSON prompt example for Dalbe-style image generation

Below is a structured prompt example for generating a square image, with a frame, in Dalbe's colors (black, white, yellow). It can be adapted for social media posts, banners, or workshop visuals.

{
  "role": "Graphic designer specializing in minimalist and tech branding",
  "objective": "Create a square image to promote a Dalbe AI Workshop",
  "brand_identity": {
    "primary_colors": ["#000000", "#FFFFFF", "#FFD400"],
    "style": "Minimalist, modern, tech, clean",
    "feeling": "Professional, intelligent, clear",
    "avoid": ["aggressive gradients", "extra colors", "cartoon style"]
  },
  "composition": {
    "format": "1:1",
    "size": "1024x1024",
    "background": "Matte black or clean white",
    "frame": {
      "type": "Continuous thin frame",
      "color": "#FFD400",
      "thickness": "4-6px"
    },
    "layout": "Centered, breathable, plenty of negative space"
  },
  "visual_elements": {
    "main_focus": "Open laptop with a stylized AI interface",
    "secondary_elements": "Subtle geometric lines or discreet tech pattern",
    "lighting": "High contrast, high clarity"
  },
  "text_overlay": {
    "headline": "Dalbe AI Workshop",
    "subheadline": "Practical. Applied. No hype.",
    "font_style": "Modern sans-serif, bold for the headline",
    "text_color": "#FFFFFF or #000000 depending on background"
  },
  "technical_requirements": {
    "high_resolution": true,
    "sharp_edges": true,
    "clean_typography": true,
    "no_watermark": true
  }
}

This type of prompt maintains the brand's visual consistency and avoids chromatic or stylistic improvisations. The clear structure helps the AI generate images that closely match Dalbe's identity.

Image generation and meta-prompting: an AI writes prompts for another AI

I demonstrated image generation in ChatGPT, starting from a simple prompt, then moving to something more interesting: meta-prompting. I asked ChatGPT to write a prompt for Gemini, and vice versa, for the same type of image. The results were visibly different and more elaborate than prompts written directly from scratch.

This was the second big AHA moment of the evening. The idea that you can use an AI to write prompts for another AI is not intuitive at first glance, but it immediately becomes valuable once you see it in action. And yes, it works.

I showed some examples of generating images with faces and how deepfakes work, not to impress, but to show the stakes. My goal was not to give them homework, but to show them how many things are out there and what implications they have.

Alpar and ChatGPT generated Jennifer Lawrence

I went through my entire image generation process for articles: a simple reference prompt, a meta-generated prompt by another model, and a JSON prompt that creates a reusable template. A JSON prompt for images means you can maintain visual consistency across all images in the same article or project, without starting from scratch every time.

Deep thinking, Perplexity, and the story of my 2012 Ford

I demonstrated the deep thinking modes in ChatGPT, including the different modes available in the interface. From deep thinking, we naturally moved to Perplexity AI, the AI research tool I recommend when you need documented and sourced information.

And here I shared a personal story. When I bought my current 2012 Ford, I used Perplexity for balanced research on cars available in the Târgu Mureș area. I analyzed maintenance costs, reliability, and user feedback for several models, and narrowed it down to three options. The Ford wasn't in first place, but it was in the top 3. Then came my own real-world verification. Perplexity was a genuinely helpful tool, not a decision replacement. That's what I wanted to convey: AI can make heavy research easier, but the judgment remains yours.

Hallucinations: where we are in 2026

The topic of hallucinations cannot be missing from a serious discussion about AI. When LLMs first appeared, the error rate was high and visible. Today, most major models score well above 80% on common factual benchmarks, a significant improvement. But that doesn't mean you can blindly trust every answer.

The rule of thumb I apply: the more critical and factual the task, the more human verification you need. AI doesn't replace judgment; it accelerates it. The difference matters.

ChatGPT Projects, MyGPTs, and how I write an article

I presented the Projects section in ChatGPT, which allows organizing conversations by topic with their own instructions and dedicated source data. You can have multiple chats within the same project, all sharing the same context and baseline instructions.

I also opened Dalbe Article Writer GPT, a MyGPT built by us that follows my specific SEO article writing process. And from there, we went straight into a live demo of my writing process.

I am against auto-generated, substance-less articles. A useful article in 2026 is one that cannot be replaced by a five-second answer from any AI model. It needs substance, real experience, specific context. Otherwise, it helps no one and convinces no search engine that it's worth displaying.

The common mistake: copy-pasting from ChatGPT directly into the CMS

If you copy text from ChatGPT and paste it into a WordPress or Shopify editor, you will automatically import dirty HTML code, invisible spaces, duplicate tags, and broken formatting. The solution is simple: ask the model to deliver the article as clean, SEO-ready HTML, with correct headings and no junk. That is exactly how this article was prepared.
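One quick sanity check before publishing is verifying that the heading hierarchy is sane (no jump from h1 straight to h3). A minimal sketch using only Python's standard library; the "no skipped levels" rule here is our own check, not a CMS requirement.

```python
from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    """Collect h1-h6 tags and flag skipped heading levels (e.g. h1 -> h3)."""

    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # Match exactly h1..h6 (ignores <hr>, <header>, etc.)
        if len(tag) == 2 and tag[0] == "h" and tag[1] in "123456":
            self.levels.append(int(tag[1]))

    def is_clean(self) -> bool:
        prev = 0
        for level in self.levels:
            if level > prev + 1:  # jumped more than one level deeper
                return False
            prev = level
        return True

clean = "<h1>Title</h1><h2>Section</h2><p>Text</p>"
dirty = "<h1>Title</h1><h4>Oops</h4>"

checker = HeadingChecker(); checker.feed(clean)
print(checker.is_clean())   # True

checker2 = HeadingChecker(); checker2.feed(dirty)
print(checker2.is_clean())  # False
```

Thirty seconds of checking saves you from publishing a page that looks fine but confuses both readers and search engines.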

SEO, EEAT, and why I don't try to trick Google

I explained Google's EEAT principles: Experience, Expertise, Authoritativeness, Trustworthiness. Not as an abstract theory, but as a concrete way to think about the content you publish.

My rule is simple: an article must show what you know how to do, not just what you know how to say. The difference is obvious to anyone reading carefully, and Google has invested heavily in distinguishing between the two.

I explained the importance of metadata, how to write an SEO title and meta description that work together with the content, and why ALT tags for images are not optional. We also talked about the audience: you have to know who you are writing for and write at their level. You don't use complex technical phrases for agricultural entrepreneurs, and you don't explain code to people with no technical background.

And I emphasized something important: instead of spending energy trying to trick Google, it's better to write articles for people. The energy spent is the same, but the results are completely different.

The Claudiu moment: after this explanation, he told me he wants a separate meeting for JoySpa dedicated exclusively to SEO and EEAT. That confirms to me that the information reached its target.

Google Gemini, AI Studio, and NotebookLM

We moved on to the Google ecosystem. I showed Gemini's Study mode and a few Google AI experiments, then I opened Google AI Studio, Google's development playground.

AI Studio is a remarkable tool: it allows you to build functional apps with just a few prompts, without being a developer. We live-built a small, functional app. Ovidiu lit up immediately and started thinking about how he could build a checklist system for his construction sites. This is exactly the type of reaction I look for at these workshops: not to impress, but to spark concrete ideas.

Then I opened NotebookLM, the Google tool that works based on the sources you define. The takeaway I want you to remember: if you feed your own website as a source into NotebookLM and the generated summary and mind map are correct and relevant, it means your communication is clear and coherent. If they are not, you don't have an SEO problem; you have a communication problem. It's an unusual and free way to audit your own messaging.

I also showed the components in NotebookLM Studio: infographic and video generation, which immediately generated interest. And I mentioned that the Google Pomelli tool could be interesting for marketing in the future, even if it is currently in its early stages.

Anthropic Claude and vibe coding

We arrived at the Dalbe development team's favorite tool. I explained why Claude is considered one of the best models for writing and coding. We also talked about a lesser-known detail: Anthropic faced a copyright dispute because its models were trained on books, but that same book-heavy training is part of what makes Claude an exceptionally good writer, excellent for both texts and programming.

From there, we got to vibe coding: the current trend of building apps and components using AI without necessarily being a developer. It's not magic; it's a process shift. You know what you want to build, you describe it clearly, you iterate. I showed a recent example we vibe-coded for a client, a Shopify chatbot, to illustrate that these things are already in production, not just in theory.

Notion AI, Gamma, and Suno

I showed how we use Notion at Dalbe for planning, documentation, and project management, with integrated AI that generates templates, checklists, and structured documentation. I even opened my upcoming course plan as a concrete example. Ovidiu found Notion very relevant for organizing construction projects, especially for the checklist and phase planning aspects.

Notion Lecture Plan Example

We quickly went through Gamma.app for generating automated presentations, and Suno, the AI music generator. Suno has a concrete practical utility: it generates original music with its own lyrics, which eliminates the copyright risk on social networks. The process is simple: write the lyrics with ChatGPT, feed them to Suno, get the song. For anyone producing video content for social media, this solves a real problem.

Notion AI prompt example – construction site compliance checklist

Below is a prompt example that Ovidiu can use in Notion AI to generate a daily template for verifying site compliance according to current standards and regulations.

{
  "role": "Technical consultant in construction site management",
  "context": {
    "user_profile": "Site Manager responsible for compliance and safety",
    "project_type": "Small and medium residential / commercial construction",
    "objective": "Daily verification of works to ensure compliance with technical and safety standards",
    "priority": "Reducing errors and preventing non-conformities"
  },
  "task": "Generate a Notion template for a daily site compliance checklist",
  "structure_requirements": {
    "sections": [
      "General information (date, project, team present)",
      "Equipment and occupational safety check",
      "Materials used check (as per project and specs)",
      "Execution of works check as per plan",
      "Compliance with safety regulations",
      "Identified non-conformities",
      "Corrective actions and person responsible",
      "Final confirmation by Site Manager"
    ],
    "field_types": {
      "checkbox": true,
      "date": true,
      "responsible_person": true,
      "status": ["Compliant", "Minor Observation", "Non-compliant"],
      "priority": ["Low", "Medium", "Critical"]
    }
  },
  "extra_requirements": [
    "Include short explanations for each section",
    "Design the template so it can be easily duplicated daily",
    "Include a section for attaching photos",
    "Ensure a clear structure that is easy to read on mobile"
  ],
  "output_format": "Clear Notion structure, organized by sections and fields",
  "language": "English",
  "tone": "Practical, direct, control and accountability-oriented"
}

This type of prompt helps the site manager turn Notion AI into an operational control tool, not just a text generator. The goal isn't to write more beautifully, but to reduce risks and maintain standards.

Google Vids, Shopify chatbots, and AI at the end of the night

I also showed Google Vids, Google's video generation tool, and mentioned Shopify AI and how AI chatbots can help businesses build a functional and relevant FAQ. I showed a recent example we created through vibe coding, which was already live on a client's site.

Make, n8n, and interconnecting tools

We concluded with an overview of the Make and n8n automation platforms, which connect all these tools together. An AI model is powerful on its own. It becomes transformative when you connect it with your real workflows: an answer from ChatGPT automatically goes into a Google Sheet, which triggers a notification in Slack, which creates a task in Notion. This is no longer science fiction; it's a setup we do for clients.
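The chain described above can be sketched as plain functions, where each function stands in for one Make or n8n module. Everything below is purely illustrative: in a real scenario these would be HTTP calls to the Google Sheets, Slack, and Notion APIs, wired together visually rather than in code.

```python
# Illustrative pipeline: each function stands in for a Make/n8n module.
# The lists simulate the external systems a real scenario would call.

sheet_rows, slack_messages, notion_tasks = [], [], []

def append_to_sheet(answer: str) -> dict:
    """Stand-in for a 'Google Sheets: add row' module."""
    row = {"id": len(sheet_rows) + 1, "answer": answer}
    sheet_rows.append(row)
    return row

def notify_slack(row: dict) -> None:
    """Stand-in for a 'Slack: send message' module."""
    slack_messages.append(f"New AI answer logged as row {row['id']}")

def create_notion_task(row: dict) -> dict:
    """Stand-in for a 'Notion: create task' module."""
    task = {"title": f"Review answer #{row['id']}", "status": "To do"}
    notion_tasks.append(task)
    return task

def run_pipeline(answer: str) -> dict:
    """One trigger runs the whole chain, like a single Make scenario."""
    row = append_to_sheet(answer)
    notify_slack(row)
    return create_notion_task(row)

task = run_pipeline("Draft reply about wooden hot tub maintenance")
print(task["title"])  # Review answer #1
```

The value of Make and n8n is exactly this: the chain runs on every trigger, unattended, and each module can be swapped without touching the rest.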

Practical conclusion

The February 12 workshop confirmed something we already knew: entrepreneurs in Transylvania are curious, pragmatic, and ready to adopt tools that save them time and money. They lack a guide to show them exactly what exists and how to use it in real-world situations.

That is what we do at Dalbe at these workshops. Not theoretical presentations, but live demonstrations, examples from our real projects, and conversations about how each tool fits a specific business.

The article about the Hungarian workshop on February 19 is coming next. If you want to participate in one of our future workshops, write to us directly.

Tools we demonstrated in the workshop

Below you will find a short list of the tools we showed on the evening of February 12, plus official links. If you want to test them, start with 1-2 and stick with them until they become part of your routine. There's no point in installing them all in one day.

  • Make – automations and integrations between tools, without much coding.
    https://www.make.com/
  • n8n – more flexible automations, suitable for tech teams too, self-host or cloud.
    https://n8n.io/
  • Perplexity – research-oriented AI, good for searches with sources and quick fact-checking.
    https://www.perplexity.ai/

Short note: if you work with sensitive data, treat AI like any other external system. Avoid copying personal data, contracts, or internal information that shouldn't leave the company.
