(1)
What is a good way to determine the number of lags? Is it by looking at the iresp output to see if your IRF is modeled correctly? Or could you look at the single-trial average of your time series to determine the values for minlag and maxlag? Or is there an alternative method? Also, is there a disadvantage to using too many lags?
There is probably no simple and straightforward way to decide the lags. The choice could be based on past studies, on knowledge about the duration of the response, or on anything else, such as the approaches you mentioned above. Do realize that this is not necessarily a one-shot decision: you can modify the lags and rerun the program if you are not happy with the current selection.
You might think that more lags would capture any potential response duration, since the worst case would just be a long tail of zeros (or values close to zero) in the estimated IRF, but there is a cost: the more lags, the more parameters you have to estimate, and the fewer degrees of freedom left for modeling and statistics.
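The degrees-of-freedom cost can be sketched with simple arithmetic (a hypothetical illustration, not AFNI code): with N time points, each additional lag per stimulus adds one regression parameter, so the residual degrees of freedom shrink accordingly.

```python
# Hypothetical illustration of the degrees-of-freedom cost of extra lags.
# Residual df = N - (baseline parameters) - (stimulus parameters),
# where each stimulus contributes (maxlag - minlag + 1) parameters.

def residual_df(n_timepoints, n_baseline, lag_ranges):
    """lag_ranges: list of (minlag, maxlag) pairs, one per stimulus."""
    n_stim_params = sum(maxlag - minlag + 1 for minlag, maxlag in lag_ranges)
    return n_timepoints - n_baseline - n_stim_params

# 200 time points, 2 baseline terms (constant + linear drift):
print(residual_df(200, 2, [(0, 4)]))   # 5 lags  -> 193 df
print(residual_df(200, 2, [(0, 14)]))  # 15 lags -> 183 df
```

So going from 5 to 15 lags here costs 10 degrees of freedom that would otherwise be available for the noise estimate and statistics.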
(2)
If using the "pure multiple regression" method, i.e., running the regressors through waver first, is it always the case that you set minlag=maxlag=0?
Yes, this is usually the case, unless you have some reason to model a delayed response, for example, minlag=maxlag=1.
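A sketch of that setup (the file names here are hypothetical, and you should check the help output of waver and 3dDeconvolve in your AFNI version for the exact options):

```shell
# Convolve a 0/1 stimulus time series with a gamma-variate IRF
# (stim_onsets.1D and epi+orig are hypothetical file names).
waver -GAM -dt 2.0 -input stim_onsets.1D > ideal.1D

# Feed the pre-convolved regressor straight in: minlag = maxlag = 0,
# so no additional lagged copies of the regressor are estimated.
3dDeconvolve -input epi+orig \
    -num_stimts 1 \
    -stim_file 1 ideal.1D -stim_label 1 task \
    -stim_minlag 1 0 -stim_maxlag 1 0 \
    -fout -bucket decon_results
```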
Gang