Belial,
Permit me to rephrase the issue as one of discriminating between mechanically and organically created music. If that rephrasing remains useful, then your question becomes a variation of the one Alan Turing posed in 1950: “Can machines think?”
http://www.abelard.org/turpap/turpap.htm
The answer depends upon where one places musical composition in the sweep of cognitive effort.
Musical composition, while not bound by the same rules of logic as mathematics or the sciences, nevertheless requires a considerable amount of cognitive effort to produce anything of interest. The pitches of the notes are selected with care – they are intended to be heard together and in relation to each other. The durations of the notes are likewise deliberate, serving the composer's intent. The combination of these two components forms a third feature of a composition. This holds true (intentionally or otherwise) throughout Occidental musical forms, up through Arnold Schönberg, but stops somewhere before John Cage, who is notable (notorious?) for his composition 4'33" (1952).
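To make those three features concrete, here is a toy sketch in Python – my own illustration, not anyone's standard representation; the Note class and the example melody are assumptions for the sake of argument:

```python
from dataclasses import dataclass

# A toy model of the three features discussed above:
# pitch, duration, and their combination.

@dataclass
class Note:
    pitch: int       # MIDI-style pitch number, e.g. 60 = middle C
    duration: float  # length in beats, e.g. 0.5 = an eighth note

# A melodic line is the combination: an ordered sequence of
# (pitch, duration) pairs, heard in relation to one another.
melody = [Note(60, 1.0), Note(62, 0.5), Note(64, 0.5), Note(67, 2.0)]

# The relations between notes (intervals) are what the ear tracks.
intervals = [b.pitch - a.pitch for a, b in zip(melody, melody[1:])]
print(intervals)  # [2, 2, 3] -- whole step, whole step, minor third
```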
Now, just these three features of a score create a significant computational burden. Add multiple instruments, each with its own dynamic range; add the interrelationships between, say, the vocal line (male or female voice, with accent) and the lead guitar. Such a texture is extremely difficult to describe programmatically, and rendering the results in real time could stress even a fast multiprocessing system. Not unthinkable, but a real challenge.
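Some back-of-envelope arithmetic suggests the scale of that burden. The vocabulary sizes below are toy numbers of my own choosing, not measurements:

```python
# Assume a modest vocabulary: 48 usable pitches (four octaves)
# and 8 common note durations.
pitches, durations = 48, 8
events = pitches * durations              # 384 possible single-note events

# An 8-bar melody of, say, 32 notes drawn from that vocabulary:
melody_space = events ** 32
print(f"{melody_space:.1e}")              # ~5e82 candidate melodies

# Add a second, independent line with a dynamic level per note
# (say 6 levels, pp through ff) and the space multiplies again:
two_part_space = (events * 6) ** 32 * melody_space
print(f"{two_part_space:.1e}")            # astronomically larger
```

The point of the exercise is only that the search space explodes multiplicatively with every added instrument and expressive dimension, long before questions of taste or intent even arise.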
Now, yes, I will admit that this is a particular definition of, or notion about, music. There is an enormous range of “intentional sound organization” that can lay just claim to being music, with or without melody or beat, and with different tone scales – but the same criticisms hold.
On balance I’d say that AI and the hardware it runs on have a good way yet to go before computational music will be able to pass the Musical Turing Test.