Brain-computer interfaces (BCIs) translate brain activity into computer commands, opening a novel non-muscular channel for communication and control. Before BCIs can be used effectively, their machine learning algorithms require extensive calibration on labeled training data. This tedious calibration process is not only time-consuming and costly but also restricts the exploration and optimization of stimulus parameters that could greatly enhance BCI performance. To overcome the challenge of acquiring large training datasets, a simulation framework was developed that eliminates the need to record calibration data. Unlike previous studies, this simulation framework incorporates a biologically plausible forward model of the code-modulated visual evoked potential (c-VEP). Using synthetic data generated by this improved framework, an offline study systematically compared five stimulus conditions: the almost perfect autocorrelation (APA) sequence, the de Bruijn sequence, the Golay sequence, the Gold code, and the m-sequence. The Golay sequence achieved the highest grand-average performance, followed by the APA sequence, the m-sequence, the Gold code, and finally the de Bruijn sequence. Furthermore, when the stimulus sequence was optimized per participant, the Golay and APA sequences typically yielded the highest classification accuracy, and this individualized selection significantly improved overall accuracy compared to using the Golay sequence for all participants. This research represents an important first step towards optimizing BCI stimulus parameters with simulated data, an approach that could accelerate the development and optimization of BCIs and yield more effective applications tailored to individual users.
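The core forward-model idea behind such simulation can be sketched as a linear superposition: each bit of the stimulus sequence triggers a transient response, and the synthetic EEG is the sum of these convolved event trains plus noise. The sketch below is illustrative only; the response templates, frame rate, code length, and noise level are placeholder assumptions, not the parameters of the framework described here.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 60  # assumed stimulus frame rate in Hz (illustrative)
code = rng.integers(0, 2, 63)  # placeholder binary stimulus sequence (e.g., one m-sequence period)

# Hypothetical transient responses evoked by "flash" (bit 1) and "no-flash" (bit 0)
# events; a biologically plausible forward model would use measured or modeled templates.
t = np.arange(0, 0.3, 1 / fs)
flash_resp = np.exp(-t / 0.05) * np.sin(2 * np.pi * 10 * t)
noflash_resp = 0.3 * flash_resp

# Linear superposition: convolve each event train with its response and sum.
clean = (np.convolve(code, flash_resp) + np.convolve(1 - code, noflash_resp))[: len(code)]

# Add Gaussian sensor noise to obtain one synthetic single-channel c-VEP trial.
eeg = clean + 0.5 * rng.standard_normal(len(clean))
```

Repeating this for each candidate code and several noise draws yields the kind of synthetic dataset on which stimulus sequences can be compared offline without recording real calibration data.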