“Let’s Get Used to Meta” vs. “Let’s Train Meta”
Earlier, I mentioned two phrases — “Let’s get used to Meta” and “Let’s train Meta.” Let’s understand the difference.
When we talk to Meta and ask it something, we’re using a prompt. A prompt is simply the instruction or question we give. Meta tries to understand the prompt and gives a response. If it doesn’t understand fully, it tries to give the closest possible answer. And we can keep asking it again and again until we get the answer we want.
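For readers who would like to see this question-and-answer loop spelled out, here is a minimal sketch in Python. The `ask_meta` function is purely hypothetical, a stand-in for whatever interface you actually use to reach the assistant (WhatsApp, Messenger, or an API); nothing here is a real Meta library call.

```python
# A minimal sketch of the prompt-and-response loop described above.
# ask_meta() is a hypothetical stand-in, NOT a real Meta API; replace it
# with whatever chat interface you actually use.

def ask_meta(prompt: str) -> str:
    """Pretend to send a prompt and return the assistant's closest answer."""
    return f"(closest answer the assistant could give to: {prompt!r})"

def refine_until_satisfied(first_prompt: str) -> str:
    """Keep asking, again and again, until we get the answer we want."""
    prompt = first_prompt
    while True:
        answer = ask_meta(prompt)
        print(answer)
        follow_up = input("Press Enter if satisfied, or type a follow-up question: ")
        if not follow_up:
            return answer
        prompt = follow_up

refine_until_satisfied("Suggest three Tamil names for a bakery.")
```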
Meta works across platforms such as WhatsApp and Facebook Messenger.
- Saying “let’s get used to Meta” means we should learn how to use Meta by asking questions or giving commands.
- Saying “let’s train Meta” means we teach Meta to understand us better — our style, our needs, and how we communicate — so that it gives more suitable responses.
What does “our style” mean? How can we make Meta understand it?
Imagine your child has to write an essay for a school competition. If you write it for them using big words and mature ideas, the teacher might ask, “Did you write this, or did your parents help you?” because the writing doesn’t match your child’s age or level.
But if you write in your child’s natural style — using simple words and sentence structure like they usually would — it will feel genuine, and the teacher won’t doubt it.
The same idea applies to how we interact with Meta.
If you give Meta one or two examples of essays your child wrote before, and then ask it to write a new essay in the same style, Meta will analyze the examples and generate a new essay using similar words, sentence patterns, and tone, so it feels like your child wrote it.
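As a rough sketch of how such a request can be assembled, the snippet below builds a “few-shot” prompt: it pastes in the earlier essays as examples and asks for a new one in the same voice. The essay texts are invented placeholders, and `ask_meta` is the same hypothetical helper as in the earlier sketch, not a real API.

```python
# Sketch of a "few-shot" prompt: show one or two real examples, then ask
# for new text in the same style. The essays below are placeholders.

def ask_meta(prompt: str) -> str:
    # Hypothetical stand-in for a real chat interface or API call.
    return f"(assistant's reply to a {len(prompt)}-character prompt)"

example_essays = [
    "My favourite festival is Pongal. We make sweet pongal at home. "
    "I like the sugarcane the most.",
    "Last week I went to my grandmother's village. I saw a big pond. "
    "I fed the ducks there.",
]

new_topic = "My school library"

prompt = (
    "Here are two essays my child wrote:\n\n"
    + "\n\n---\n\n".join(example_essays)
    + f"\n\nNow write a new essay on '{new_topic}' in exactly the same style: "
    "the same simple words, short sentences, and tone."
)

print(ask_meta(prompt))
```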
Whether it’s Meta, ChatGPT, Gemini AI, or any other tool, they all work based on the input we provide. The more we use them, the better they understand our preferences and style, and the better their output becomes.
That’s why these platforms often show two icons after each answer: a thumbs-up (👍) and a thumbs-down (👎).
If you click the thumbs-up, the system records that the answer was helpful; if you click the thumbs-down, it records that the answer needs improvement. This feedback is collected and used to make future responses better.
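To make that idea concrete, here is a heavily simplified sketch of what might happen behind those buttons. The file name, function, and record format are all invented for illustration; real systems are far more elaborate, but the core idea is the same: each click becomes a labelled record that can later be used to improve the model.

```python
import json
from datetime import datetime, timezone

# Heavily simplified sketch: each thumbs-up/down click becomes one labelled
# record in a log, which can later serve as training data. All names and
# formats here are invented for illustration.

FEEDBACK_LOG = "feedback.jsonl"

def record_feedback(prompt: str, answer: str, thumbs_up: bool) -> None:
    """Append one (prompt, answer, rating) record to a JSON-lines log."""
    record = {
        "time": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "answer": answer,
        "rating": "good" if thumbs_up else "needs_improvement",
    }
    with open(FEEDBACK_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: one helpful answer, one poor one.
record_feedback("What is Pongal?", "Pongal is a Tamil harvest festival...", thumbs_up=True)
record_feedback("Show a Vaishnavite naamam", "Here is a plain dot.", thumbs_up=False)
```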
A Real-Life Example:
A few months ago, I was invited to speak about AI at a university in Karaikudi. During the session, a professor from American College, Madurai, asked me a question:
Professor’s Question:
“In AI art generators, even when we clearly describe traditional Indian symbols — like the ‘pattai’ mark for Shaivites or the ‘naamam’ for Vaishnavites — the AI often ignores these details or applies them incorrectly. Why is that?”
My Answer:
AI only works with the data and examples that have been given to it. If the training data doesn’t have enough clear examples of such religious symbols and their correct usage, the AI won’t understand them well. It compares your prompt with what it already knows, and only then creates output — whether that’s text, image, sound, or video.
To make AI give us exactly what we expect, two things are important:
- We must know how to ask (prompting).
- The AI must already have enough data related to our question.
It’s like this: You can ask a doctor medical questions because their brain is filled with medical knowledge. But if you ask them mechanical engineering questions, they might not be able to answer accurately — because that’s not their area of expertise.
Likewise, an AI can only speak in depth about a topic if that topic’s information has already been fed into it and it has been trained on that subject.
The Same Logic Applies to AI Tools
AI learns only when we feed it information. The more data we give it, the more accurate and useful its responses become. And the more people use AI tools, the more capable those tools become.
For example, if someone asks an AI to generate an image of Lord Krishna, it draws on the training images and references it has already seen, in which Krishna typically appears with blue skin, a flute, a peacock feather, and so on, and then creates a new image using those details.
[Image: an AI-generated image of Lord Krishna, created using Microsoft Copilot.]
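One practical lesson follows from this: rather than hoping the model has learned a culturally specific detail, spell every expected detail out in the prompt. A sketch of that habit is below; `generate_image` is an invented placeholder, not a real Copilot or Meta function.

```python
# Sketch: list culturally specific details explicitly instead of relying on
# the model to infer them. generate_image() is a hypothetical placeholder.

details = [
    "blue skin",
    "a bamboo flute held to the lips",
    "a peacock feather on the crown",
    "yellow silk garments",
]

prompt = "A traditional painting of Lord Krishna with " + ", ".join(details) + "."

def generate_image(prompt: str) -> str:
    """Pretend to generate an image and return its file path."""
    return "krishna.png"  # placeholder result

print(prompt)
print(generate_image(prompt))
```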
To make AI tools smarter in understanding Indian culture, literature, language, and art, we must input more of this data into them. Only then will the AI learn how to respond better to culturally specific questions. Just building AI tools is not enough — they need proper training and regular use to be truly effective.
Conclusion:
We must teach AI, use AI often, and challenge it with different ideas. Only then will it become intelligent and capable. Our researchers should focus more on developing and training AI using rich Indian content, so that AI can truly understand and respond in ways that reflect our traditions, knowledge, and style.
—***—