I’d like to share an interesting AI-generated podcast (created using Google’s NotebookLM) that summarizes the **mORMot 1.18 SAD document**. You can listen to it here:
https://notebooklm.google.com/notebook/ … c8f9/audio
I found it really helpful as a beginner-friendly introduction before diving into the full document. NotebookLM also does a great job answering questions about the content—think of it as a handy complement to tools like DeepWiki!
Happy learning!
Lol. Perhaps it's worth generating a mORMot 2 PDF, to meet the AI beast.
Yes! I gave it a try last month with Gemini, to generate a full project based on mORMot 2. I packed almost the whole mORMot source and test folders with repomix, added some custom corrections I was used to making, and threw a prompt of about 900,000 tokens at Gemini. It ended up producing a project with multiple REST servers, all the interface and service implementations, and an OpenAPI definition file.
I could then generate the whole JsonClient and DTOs with the mORMot tool. I just had to make some minor corrections - impressed as f**.
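For readers new to the framework, here is a minimal sketch of the kind of scaffolding involved - not the generated project itself, just the documented mORMot 2 interface-based service pattern, with ICalculator/TCalculator as made-up illustrative names:

```pascal
program MinimalService;

{$I mormot.defines.inc}

uses
  mormot.core.base,
  mormot.core.interfaces,
  mormot.soa.core,
  mormot.orm.core,
  mormot.rest.memserver,
  mormot.rest.http.server;

type
  // the service contract: an interface inheriting from IInvokable
  ICalculator = interface(IInvokable)
    ['{9A60C8ED-CEB2-4E09-87D4-4A16F496E5FE}']
    function Add(a, b: integer): integer;
  end;

  // its implementation, published by the server below
  TCalculator = class(TInterfacedObject, ICalculator)
  public
    function Add(a, b: integer): integer;
  end;

function TCalculator.Add(a, b: integer): integer;
begin
  result := a + b;
end;

var
  model: TOrmModel;
  rest: TRestServerFullMemory;
  http: TRestHttpServer;
begin
  model := TOrmModel.Create([], 'root');
  rest := TRestServerFullMemory.Create(model);
  try
    // expose TCalculator as the shared implementation of ICalculator
    rest.ServiceDefine(TCalculator, [ICalculator], sicShared);
    // publish the REST server over HTTP as JSON endpoints under /root
    http := TRestHttpServer.Create('8888', [rest]);
    try
      writeln('Listening on http://localhost:8888/root - press [Enter] to quit');
      readln;
    finally
      http.Free;
    end;
  finally
    rest.Free;
    model.Free;
  end;
end.
```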
Yes! I gave it a try last month with Gemini, to generate a full project based on mORMot 2. I packed almost the whole mORMot source and test folders with repomix, added some custom corrections I was used to making, and threw a prompt of about 900,000 tokens at Gemini. It ended up producing a project with multiple REST servers, all the interface and service implementations, and an OpenAPI definition file.
I could then generate the whole JsonClient and DTOs with the mORMot tool.
Sounds truly inspiring.
Could you elaborate on what's included in your "900,000 tokens" and which mORMot tool you used to generate the JsonClient and DTOs? Also, when packaging with repomix, did you include the implementation parts of all unit files? (I always thought AI finds that part useless.)
Hi @zen, yes I can - let me create a repo for that.
did you include the implementation parts of all unit files? (I always thought AI finds that part useless.)
Yes, because of two things. The first is that the framework is complex, and I know models are subject to hallucinations while working with mORMot; for example, you can get GPT to work on v2 only after corrections and after sending samples to the model.
The second is that I had the feeling Gemini had a lot of power, and with a 1-million-token context (like GPT-4.1 now) I tried throwing the full source code at it. And it worked like a charm.
And as the tests are written by @ab and show the right way of implementing things, I thought it was best to pack the tests too. FYI, packing the WHOLE framework doesn't work, as you will end up with too many tokens, like 2 or 3 million - so you must choose the best parts based on what you are trying to achieve.
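To make the packing step concrete, a repomix run could look like the sketch below; the folder list is illustrative rather than the one actually used, and the flags assume a recent repomix version (check `repomix --help`):

```sh
# pack only the most relevant framework parts plus the tests,
# to stay under the model's context limit
npx repomix path/to/mORMot2 \
  --include "src/core/**,src/orm/**,src/rest/**,src/soa/**,test/**" \
  --output mormot2-pack.md
```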
I posted a small README there; sorry, I had to do it fast as I am at work. Feel free to ask.
I posted a small README there; sorry, I had to do it fast as I am at work. Feel free to ask.
Thanks a lot! I was really troubled by GPT's hallucination issues and by the sheer size of the mORMot framework. Thanks to your guidance, I finally understand the right direction.
Also, sorry to bother you while you're working. Please go ahead with your tasks :-) I need to digest the information you've shared as it's a brand new area for me. :-)
My pleasure
I forgot to talk about the tool. To get the mopenapi tool, just compile it from src/tools/mopenapi (see the blog post), and then you will be able to get a JsonClient and DTO definitions in a second, ready to work with your API; then just throw the DTO code at Gemini if you want them in your interfaces or whatever.
To get the JSON file from YAML, just use any YAML <> JSON converter - I will send a PR to handle YAML in the mopenapi tool directly, if ab accepts it.
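As a sketch of that conversion step - PyYAML is just one converter among many, and the mopenapi argument form shown is an assumption, so check the tool's own usage text:

```sh
# YAML -> JSON with PyYAML (any converter works)
python -c "import json,sys,yaml; json.dump(yaml.safe_load(sys.stdin), sys.stdout)" \
  < myapi.yaml > myapi.json

# generate the client and DTO units from the JSON specification
# (argument form assumed; run the tool without arguments for its help)
./mopenapi myapi.json
```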
My pleasure
I forgot to talk about the tool. To get the mopenapi tool, just compile it from src/tools/mopenapi (see the blog post), and then you will be able to get a JsonClient and DTO definitions in a second, ready to work with your API; then just throw the DTO code at Gemini if you want them in your interfaces or whatever.
To get the JSON file from YAML, just use any YAML <> JSON converter - I will send a PR to handle YAML in the mopenapi tool directly, if ab accepts it.
1. Regarding "the tool": understood, thanks for the heads-up.
2. As for your PR, it's another handy addition to the mORMot "arsenal".
I published an MCP server for Delphi and Lazarus/FPC; I am working on a special setup focused on mORMot 2. I must say, from my tests with Gemini - and now GPT-5 included - it's quite amazing. Token usage will be reduced a lot compared to "the-big-prompt".
You can find it there: https://github.com/flydev-fr/mcp-delphi
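For context, MCP-capable clients (Claude Desktop, Cursor, etc.) register such a server through a JSON config with an mcpServers entry; a hypothetical entry could look like the following, where the command and path are placeholders - the repo's README has the real invocation:

```json
{
  "mcpServers": {
    "delphi": {
      "command": "node",
      "args": ["/path/to/mcp-delphi/dist/index.js"]
    }
  }
}
```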
@flydev, truly helpful info about Delphi coding with AI! Thanks for sharing. And thanks to the OP.
I’d like to share an interesting AI-generated podcast (created using Google’s NotebookLM) that summarizes the **mORMot 1.18 SAD document**. You can listen to it here:
https://notebooklm.google.com/notebook/ … c8f9/audio
I found it really helpful as a beginner-friendly introduction before diving into the full document. NotebookLM also does a great job answering questions about the content—think of it as a handy complement to tools like DeepWiki!
Happy learning!
Unfortunately, it says "you do not have access to view this notebook."
Do you mean that through mcp-delphi, the number of tokens sent to Gemini or ChatGPT can be significantly reduced? I'm sorry, but I'm a bit confused about how this is achieved. Could you give us some hints?
Unfortunately, it says "you do not have access to view this notebook."
Recently, Google has adjusted its audio sharing permission policy, such that only users with access to the notebook can listen to the audio. However, you can upload the mORMot 1.18 PDF yourself, create your own notebook, and then generate the audio overview from it.
Do you mean that through mcp-delphi, the number of tokens sent to Gemini or ChatGPT can be significantly reduced? I'm sorry, but I'm a bit confused about how this is achieved. Could you give us some hints?
No. At this moment, the mcp-delphi server only allows AI agents to use your IDE to compile your project (Delphi, Lazarus, or FPC directly) and get the output, nothing more. That said, if your coding tool supports "auto-mode", you can let the agents create, compile, and fix the project for you - while you're cooking, for example: I tried it with GPT-5, the project built successfully, and the potatoes were not burnt, lol.
Then I thought it could be useful to have a dedicated MCP server for working with mORMot. It would definitely reduce the number of tokens consumed, since the agents would be able to query a well-structured knowledge base for the relevant parts of the framework instead of injecting a huge prompt. I also believe the agents would "think" better, as their memory context wouldn't be overloaded.
Which coding tool are you using? Could you introduce the complete workflow? In this field, I've heard that many people use Claude Code, VS Code, or Cursor, but I haven't seen any examples of how to use them to write or refactor Object Pascal code. It seems that you are very proficient at vibe coding in Pascal. If possible, please share more of your experience.
I mostly use Cursor for frontend tasks, like building React dashboards or working with (Syn)Mustache templates (it actually works really well on Pascal codebases too). But I have to admit, Warp terminal is a beast. I will try to make you a small demo if you're interested.
To be honest, the real secret is simply to RTFM for each tool and configure it properly, then take the time to create a solid project structure (folder tree, rules, contextual prompts, etc.) and call the right model for a given task. The models to use right now are gpt-5 (truly impressive), claude-4-thinking, and gemini-2.5-pro-thinking.
I'm not following AI trends closely, but I'm quite sure claude-5 is in the pipeline.
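To illustrate the "rules" part mentioned above, here is the kind of project rules file one might give Cursor for a mORMot 2 codebase - purely illustrative content, not an actual setup:

```
# .cursorrules (illustrative)
- This project uses mORMot 2: units are named mormot.*, never the v1
  SynCommons.pas / mORMot.pas units.
- Use TOrm / TOrmModel / TRestServer, not the v1 TSQLRecord / TSQLModel /
  TSQLRestServer names.
- Services are interface-based: contracts inherit from IInvokable and are
  registered with TRestServer.ServiceDefine.
- Before proposing code, check the matching unit in src/ and the tests in test/.
```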
Thanks a lot for sharing your insights! I’d be truly grateful if you could provide a small demo showcasing your entire workflow—it would be incredibly helpful.
I’m also deeply interested in vibe coding in Object Pascal. However, I’ve always felt that current large language models aren’t very friendly toward Object Pascal. This might stem from a lack of high-quality training data. Additionally, the mORMot framework is so BIG and complex that inputting a relatively small number of tokens rarely yields effective outputs.
For these reasons, I’m closely following your practices and progress in AI-assisted coding for Pascal. I sincerely hope your work can bring a fresh wave of AI-driven innovation to the community.
I’ve always felt that current large language models aren’t very friendly toward Object Pascal
Don't be fooled...
What data has GitHub Copilot been trained on?
GitHub Copilot is powered by generative AI models developed by GitHub, OpenAI, and Microsoft. It has been trained on natural language text and source code from publicly available sources, including code in public repositories on GitHub.
...and it's not the only data source; you know... personal privacy... checkboxes... big data, etc.
the mORMot framework is so BIG and complex that inputting a relatively small number of tokens rarely yields effective outputs.
The main issue, IMO, comes from when most datasets were built. The first public release of mORMot v2 was around late 2020 (first commit in March 2020), but by then the models had already ingested the mORMot v1 source code. That's why with gpt-3, gpt-4 (and even gpt-5!) you often get v1-based code in the answers - I'm quite sure you got TSqlModel instead of TOrmModel, etc. - which is why the "big prompt" is actually useful.
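To make that naming drift concrete, here is a minimal sketch (TOrmPeople is an illustrative class; the commented lines show what v1-trained models tend to emit):

```pascal
program NamingDrift;

uses
  mormot.core.base, // RawUtf8
  mormot.orm.core;  // TOrm, TOrmModel: the mORMot 2 names

type
  // mORMot 1.18 would declare: TSQLRecordPeople = class(TSQLRecord)
  TOrmPeople = class(TOrm) // mORMot 2 renamed the ORM base class
  private
    fFirstName: RawUtf8;
  published
    property FirstName: RawUtf8 read fFirstName write fFirstName;
  end;

var
  model: TOrmModel;
begin
  // mORMot 1.18 would be: model := TSQLModel.Create([TSQLRecordPeople]);
  model := TOrmModel.Create([TOrmPeople]);
  try
    writeln(model.Root); // just proving the model is alive
  finally
    model.Free;
  end;
end.
```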
Also, if you let your tools adapt to your own habits, you'll naturally get better results. On my side, no matter what I ask, I always end up with a marmot somewhere in the answer.
Yes, the answers of many large language models are based on mORMot 1 code. Even so, it is difficult to compile and run them directly. I feel that large language models are confused by the complexity of mORMot and hallucinate badly. It's hard for me to imagine that, with the current training sets, any AI has really understood mORMot 2.
Before you give further hints and demonstrations, I still need to conduct in-depth research on my own for a while. :)