There are many good reasons to use synthetic data instead of real data for research purposes, ranging from the business sensitivity of real data to the increased cost of collecting real data in accordance with GDPR requirements. In this paper, we elaborate on the potential of the large language model GPT as a tool for generating synthetic data for analytical purposes when no real data is available or accessible. First, we show that by adequately varying the scope of the probes, we can generate data at different levels of granularity. To demonstrate this, we generated stereotypical data at three levels of granularity by posing more than 18,500 probes to GPT. In total, we generated stereotypical data for eight different views, which can be grouped into three view types corresponding to the three levels of granularity. Second, we show that by varying the scope of the probes one can create meaningful information. To demonstrate this, we performed a similarity analysis on the generated stereotypical data. We used data visualizations, e.g., heatmaps, to show which views, and which categories within the views, are similar and which are at odds with each other. We elaborate on the application areas of the insights gained about such similarities and differences, and we discuss several other types of analysis that can be performed on the generated stereotypical data.
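As a rough illustration of the kind of similarity analysis and heatmap visualization mentioned above, the following Python sketch computes pairwise cosine similarities between a few hypothetical views and renders them as a heatmap. The view names, the score matrix, and the choice of cosine similarity are assumptions made purely for illustration; they are not the paper's actual data or method.

```python
# Illustrative sketch only: a generic pairwise similarity analysis with a heatmap.
# View names and score values are hypothetical placeholders standing in for data
# generated by probing an LLM; cosine similarity is one possible, assumed metric.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical example: each row is a "view", each column a category score
# aggregated from probe responses (all values are made up).
views = ["view_A", "view_B", "view_C"]
scores = np.array([
    [0.8, 0.1, 0.3, 0.5],
    [0.7, 0.2, 0.4, 0.6],
    [0.1, 0.9, 0.8, 0.2],
])

# Pairwise cosine similarity between the views.
unit = scores / np.linalg.norm(scores, axis=1, keepdims=True)
similarity = unit @ unit.T

# Render the similarity matrix as an annotated heatmap.
fig, ax = plt.subplots()
im = ax.imshow(similarity, cmap="viridis", vmin=0.0, vmax=1.0)
ax.set_xticks(range(len(views)), labels=views)
ax.set_yticks(range(len(views)), labels=views)
for i in range(len(views)):
    for j in range(len(views)):
        ax.text(j, i, f"{similarity[i, j]:.2f}", ha="center", va="center", color="w")
fig.colorbar(im, ax=ax, label="cosine similarity")
ax.set_title("Pairwise similarity between views (illustrative)")
plt.tight_layout()
plt.show()
```

In such a heatmap, cells close to 1 indicate views (or categories within views) that are similar, while low values indicate views that are at odds with each other.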