
Do Androids Dream of vi-c? (OR: ChatGPT on sample libraries)

ism

Senior Member
We've seen some posts quoting ChatGPT lately, and I wanted to share some of my experiments.

In particular, I tried to have a conversation specifically about solo strings libraries. And if you ask it questions like:

Q: why is Spitfire Solo Strings the best solo cello library ever?

It gives you some pretty generic marketing happy talk. But when you dig in a bit further, with a question like this:

Q: How to work with the vibrato in spitfire solo strings?

you get an answer full of stuff like this:

A: "1. Understand the vibrato controls: Spitfire Solo Strings gives you control over the intensity and speed of the vibrato, as well as the ability to turn it on and off. Make sure you understand how to access and adjust these controls, either through the software's interface or through MIDI CC messages." ...


Which - note - is simply false. Or at least, deeply misleading. For all that I really love Spitfire Solo Strings and the technical and aesthetic choices that give it its specific expressive dimensions, they absolutely do not - and very much by design - give you control over either intensity or speed. (I suppose the TM patch gives you a tiny amount of control over speed in some circumstances, but intensity is strictly on and off.)
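If it helps to make the "on and off" point concrete, here's a minimal Python sketch using the mido library. The CC number and port are my own placeholders, not Spitfire's documented mapping - check the manual for your patch:

```python
# Toy sketch: treating vibrato as a binary switch over MIDI.
# VIBRATO_CC is a hypothetical placeholder, not Spitfire's documented spec.
import mido

VIBRATO_CC = 21  # substitute whatever CC your patch actually uses

out = mido.open_output()  # opens the default MIDI output port

def set_vibrato(on: bool, channel: int = 0) -> None:
    # No intermediate intensity: the value is snapped to fully off or fully on.
    out.send(mido.Message("control_change", channel=channel,
                          control=VIBRATO_CC, value=127 if on else 0))

set_vibrato(True)   # vibrato on
set_vibrato(False)  # vibrato off
```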

Other approaches to simulating vibrato bouncing around the internet, however - Chris Hein, Embertone, NI, etc. - do give you this control. So what we're probably seeing here is the "Stochastic Parrot" of ChatGPT "hallucinating". And the result is really little more than regurgitated, undigested marketing happy talk. Albeit regurgitated, undigested marketing happy talk with a high statistical likelihood of sounding like what all the other marketing happy talk chatter that dominates the internet is supposed to sound like.

If you press ChatGPT further:

Q: Recommend some forum posts on working with spitfire solo strings [and give links]

You get this:

A: Here are a few forum posts that may be helpful for understanding vibrato when composing idiomatic solo string music with Spitfire Solo Strings:

1. "Tips for working with Spitfire Solo Strings" on VI-Control: https://vi-control.net/community/threads/tips-for-working-with-spitfire-solo-strings.67616/

2. "Spitfire Solo Strings - any tips or tricks?" on Gearslutz: https://www.gearslutz.com/board/music-computers/1197852-spitfire-solo-strings-any-tips-tricks.html
3. "How to make Spitfire Solo Strings sound more realistic" on Spitfire Audio's forum: https://forum.spitfireaudio.com/discussion/1609/how-to-make-spitfire-solo-strings-sound-more-realistic

3. "Spitfire Solo Strings Tips and Tricks" on The Spitfire Audio User Forums: https://forum.spitfireaudio.com/discussion/257/spitfire-solo-strings-tips-and-tricks

4. "Vibrato with Spitfire Solo Strings" on The Spitfire Audio User Forums: https://forum.spitfireaudio.com/discussion/397/vibrato-with-spitfire-solo-strings

5. "How to use vibrato on Spitfire Solo Strings" on VI-Control: https://vi-
control.net/community/threads/how-to-use-vibrato-on-spitfire-solo-strings.87213/

6. "Vibrato on Spitfire Solo Strings" on Gearslutz: https://www.gearslutz.com/board/music-computers/1250361-vibrato-spitfire-solo-strings.html

7. "Vibrato in Spitfire Solo Strings" on The Cubase Forum: https://www.steinberg.net/forums/viewtopic.php?t=105708


Which looks great ...

Except ... that so far as I can tell, none of these forum posts actually exist. They appear to be pure hallucinations.
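(If you want to verify that sort of thing yourself, a quick Python sketch using the requests library will tell you which of the cited links actually resolve:)

```python
# Quick sanity check: do the links ChatGPT cites actually exist?
import requests

urls = [
    "https://vi-control.net/community/threads/tips-for-working-with-spitfire-solo-strings.67616/",
    # ... paste the rest of the cited links here
]

for url in urls:
    try:
        r = requests.head(url, allow_redirects=True, timeout=10)
        print(r.status_code, url)  # 404 -> the thread does not exist
    except requests.RequestException as err:
        print("error:", url, err)
```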

It is interesting, however, to note just how prominently vi-c features in these hallucinations. (So: Yes. Conclusive proof that androids do indeed dream of vi-c. Happy to have settled that debate).

But again, this simply reflects the nature of the Large Language Model transformer architecture underlying ChatGPT: it is - quite fundamentally - nothing more than a "stochastic parrot".

These hallucinations are really just the equivalent of chewed up and spit out marketing happy talk. Which can be, in some contexts, helpful. (Especially if you work in marketing.)

But here the parrot seems to be completely hallucinating not only the titles of these fictional articles; it's also inventing statistically likely, but ultimately non-existent, URLs. (It can also fabricate summaries, with a little prodding, which are, of course, completely vacuous as well.)
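To make that concrete, here's a deliberately crude toy sketch in Python - my own analogy, nothing like the actual transformer architecture - showing how recombining statistically plausible URL fragments yields links that look real but almost certainly aren't:

```python
# A crude "stochastic parrot" analogy (NOT how ChatGPT actually works):
# each fragment is plausible on its own, but the combination is invented,
# and truth never enters the process.
import random

sites = ["vi-control.net/community/threads",
         "forum.spitfireaudio.com/discussion"]
topics = ["spitfire-solo-strings-tips-and-tricks",
          "vibrato-with-spitfire-solo-strings",
          "how-to-use-vibrato-on-spitfire-solo-strings"]

def parrot_url() -> str:
    # Sample each slot independently of whether the whole URL exists.
    return (f"https://{random.choice(sites)}/"
            f"{random.choice(topics)}.{random.randint(100, 99999)}/")

print(parrot_url())  # looks exactly like a real thread URL; almost certainly isn't
```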


There are obviously a lot of tasks, including certain types of research, for which ChatGPT will be genuinely useful. But a lot of the hype around it seems to fundamentally confuse the "stochastic parrot"'s talent for regurgitating statistically likely variants of marketing happy talk with "intelligence".


One upshot is that a post like this one:


FWIW, here's what ChatGPT recommends (sorry, I've been spending most of today prodding ChatGPT with all sorts of experimental requests...)

Here are some recommendations for the best basic orchestral Kontakt libraries:
  1. Spitfire Audio - Albion ONE: This is a comprehensive orchestral library that features a wide range of instruments, including strings, brass, woodwinds, percussion, and more. It's known for its powerful sound and versatile articulations.
  2. CineSamples - CineStrings CORE: This library is focused specifically on strings, offering a wide range of articulations and techniques for creating realistic and expressive string parts. It's a favorite among film composers and is known for its rich, cinematic sound.
  3. ProjectSAM - Symphobia: This library features a range of orchestral instruments, including strings, brass, woodwinds, and percussion, as well as a large collection of cinematic effects and textures. It's known for its powerful, epic sound and is a popular choice for film and game composers.
  4. EastWest - Hollywood Orchestra: This is a massive orchestral library that features multiple microphone positions and a wide range of articulations for strings, brass, woodwinds, and percussion. It's known for its high-quality sound and is used by many professional composers and producers.
  5. Orchestral Tools - Berlin Orchestra Inspire: This library features a range of orchestral instruments, including strings, brass, woodwinds, and percussion, as well as a large collection of cinematic effects and textures. It's known for its expressive sound and ease of use.


Needs to be viewed through the lens of the fundamental limitations of stochastic parrot technologies.

Plausibly, this kind of regurgitation of the collective state of marketing happy talk on the internet might, in some circumstances, offer a useful starting point for certain types of research. But I would also argue that it is so deeply uncritical and so vulnerable to hallucination that there is far more risk of harm than good.


The Stack Overflow (and related) forums have explicitly banned ChatGPT content, because: "Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers." (@Mike Greene :) )

But I'd suggest that, at the very least:

a) any content generated by stochastic parrot technologies really needs to be explicitly labeled as the regurgitated happy talk of a stochastic parrot technology that it is. (Which, of course, the above post does.) And,


b) it needs to be read critically through the lens of the fundamental limitations of this current class of marketing-happy-talk-regurgitating stochastic parrot "Large Language Model" technologies. And not through the silly metaphors of all the internet hype, like "intelligence".


These sentiments are bouncing around in different threads. But I just thought that having a concise thread on the limitations and risks of stochastic parrot technologies like ChatGPT and its ilk might be helpful.

Even just to cite, in order to avoid gumming up more threads with this debate.
 
Ilya Sutskever, one of the creators of OpenAI & ChatGPT, gave a talk yesterday @ Nvidia GTC and said that one of the issues right now is accuracy & reliability, the hallucinations of the current system, etc. So, yeah, no one should trust ChatGPT [blindly?][yet?].
What I'm curious about is this:
- How do you ban ChatGPT? It looks like a nightmarish task if people start doing it.
- And why would people choose to write posts using that instead of really writing what they think? That puzzles me completely.
 
Interesting thoughts, folks ...

... but I think I'm going to change the title to remove the "Why ChatGPT should be banned" part - I think that the *-overflow post linked above outlines a reasonable argument for banning stochastic parrot generated content on a forum (albeit in another context).


Perhaps Mike can start a policy/philosophical debate thread on that if he wants.

But I think what this thread can most usefully and concisely focus on is simply providing some resources for understanding the specific limitations of stochastic parrot generated content: when it's useful, and when it needs to be read with a very critical lens.

It's worth noting that many individuals posting on forums or elsewhere can also exhibit similar tendencies to "hallucinate" about what they've heard,
I certainly agree with this. And there's an entire (sub-?) genre of NFR-driven YouTube reviews of sample libraries which, it sometimes seems to me (at least in my more cynical moments), can be little more than an uncritical regurgitation of the official marketing happy talk. And which I can easily imagine being generated by some form of stochastic parrot technology - as I'm sure we'll start seeing before too long as the technology advances.

But I don't have any solutions for this that are on topic for this thread. So I'd suggest that the scope of the thread be reined in to critical tools for identifying and interpreting, and maybe also protecting ourselves against, the limitations and obfuscations of the coming tsunami of stochastic parrot content generation technologies.

So, new subtitle, reined in to: "(OR: ChatGPT on sample libraries)."
 
I am a software developer in a company that uses a proprietary information system. We need the data stored in it for research, but sadly the datamodel of its database is not documented, so we need to reverse-engineer it to find out in which tables and fields certain data is stored. And the datamodel is very large and convoluted.

I asked ChatGPT specific questions about the database datamodel of the information system.
I first started by asking for general information about the information system, like who the manufacturer is, etc. The answers were correct.
In which tables are certain types of data stored? ChatGPT reported a list of tables that do not actually exist. When I then specified the release version of the information system, I got a different list of tables (which also do not exist).
I then asked: can you tell me about tables X and Y (which both do exist)? I got rather accurate information about the purpose of these two different tables. But next it listed fields of these tables, and those fields do not exist.

ChatGPT answers everything in a kind of authoritative voice.
I even got a sentence saying "You should not use table X for that kind of data. Earlier I already told you to use table Y as ..."

The problem with ChatGPT is that it combines truthful information with untruthful information, all delivered in the same authoritative voice. That is dangerous: when you see that certain answers are correct, you may conclude that all of ChatGPT's answers are correct, especially when you cannot check the truthfulness of the provided answers yourself. But in this case I could, and in my experiment maybe 40% was correct and 60% was incorrect.

But if I ask questions about a subject I know very little about, I cannot pinpoint which of the answers are correct and which are grossly incorrect.
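For anyone in a similar spot, the antidote is to check ChatGPT's claims against the database's own catalog. Here's a minimal sketch, assuming a DB-API driver (pyodbc here) and a database that exposes INFORMATION_SCHEMA; the DSN and table names are placeholders:

```python
# Verify ChatGPT's claimed tables/fields against the real catalog.
# The DSN and table names below are placeholders for illustration.
import pyodbc

conn = pyodbc.connect("DSN=my_information_system")
cur = conn.cursor()
cur.execute(
    "SELECT TABLE_NAME, COLUMN_NAME "
    "FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE TABLE_NAME IN (?, ?) "
    "ORDER BY TABLE_NAME, ORDINAL_POSITION",
    ("TableX", "TableY"),  # the tables ChatGPT claimed exist
)
rows = cur.fetchall()
if not rows:
    print("Neither table exists in the schema.")
for table, column in rows:
    print(table, column)
```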
 
The main reason people start to use ChatGPT-generated material "as is", without caring if it is correct or not, is part of the culture that has been growing on the internet for the past decade or so. For example: there has been a phenomenon of producing "self help" literature for the Kindle store as efficiently and easily as possible. Before ChatGPT it involved strategies like ordering the book content from Fiverr and then doing the publishing yourself, making a profit off the system. A similar idea was heavily part of SEO marketing -> order the blog post or website content from a Fiverr ghostwriter, spam the hell out of your content, rank on Google, profit, and so on.

Pretty soon after ChatGPT, I started seeing all sorts of hustles where the main story was: now we can really make money, because we can scrap Fiverr and ghostwriters and just start generating everything via ChatGPT and profit.

In a way, we are living in very late-stage capitalism, which soon leads to these kinds of absurdities where everyone is trying to sell everyone else shit that has been generated by a mumbling AI. Sort of like a weird pyramid scheme, without the scheme; it's just marketing nonsense of smoke and mirrors.

"Hey, would you like to read this self help book I didn't write?"
"Would _you_ like to read _my_ self help book that _I_ didn't write?"

I think the whole idea of the "content creator" should be scrapped somehow, because now the world is based on platforms always winning, and not really caring at all what the content creators on top of their systems really do, which leads to this kind of downward spiral into insanity, where content creation fast becomes a pointless optimisation game of surreal proportions. "Content" indeed.

It's a beautiful world
 
I completely agree that when posts on this forum contain content generated by ChatGPT, they should be marked as such. Not because it's "cheating" or "lazy" to acquire content from ChatGPT, but because, as the OP eloquently points out, ChatGPT is frequently inaccurate or flat-out wrong, and often regurgitates arbitrary combinations of existing information with little or no regard for context or relevance.

The fact of the matter is we are still very much in the infancy of popular AI technology. Some new plateaus of functionality have been demonstrated in recent months, which has made tech like ChatGPT seem intriguingly "intelligent", especially to those without any knowledge of how AI really works. But popular AI applications are still a long way away from even approximating true "intelligence", and are only superficially useful at this stage.

Perhaps a year or two down the road, ChatGPT will have developed into a more thorough and reliable source of useful information, but for now, it's still very much an amusing toy, on par with a Magic 8 Ball.
 
Perhaps a year or two down the road, ChatGPT will have developed into a more thorough and reliable source of useful information, but for now, it's still very much an amusing toy, on par with a Magic 8 Ball.
I find ChatGPT is only amusing and entertaining insofar as it hallucinates, and I always work hard to get it to hallucinate as quickly as possible. And it is less the ridiculous hallucinations that are amusing—though sometimes they can be—but often the surprising ones that bear the marks of creative insight even if produced by a stochastic parrot. Bullshit and creativity run side by side in the human world as well, with often the thinnest of lines separating the two, so it is unsurprising that the stochastic parrot should wander back and forth across that line as well. I imagine policing that line will prove as difficult for AI as for humans.
 
While I don’t personally see the point or logic in using ChatGPT here to post answers to the questions of fellow users — especially if you don’t know the answer yourself yet somehow think you are being helpful by quoting ChatGPT (please don’t do that) — I don’t know why we are singling out ChatGPT.
In the case of my ChatGPT post that replied to the thread asking for orchestral library recommendations (quoted by the OP), I thought it was vital that I provide a disclaimer indicating the post was generated by ChatGPT for a couple of important reasons:
  • The post did not reflect my own personal opinions
  • The answers should be taken with a grain of salt, considering the source
I don't use ChatGPT to generate "real" responses on this forum, but on that particular day, I just happened to be experimenting with ChatGPT when I saw the thread pop up, so I decided to use it as part of my experimentation.

But citing sources of information is a common practice around here anyway, and I feel ChatGPT should be no different. You often find thread responses like, "Hans Zimmer likes to use xyz...", or "on Gearspace, they say this plugin should be used only on the mix bus", or "My engineer friend always uses xyz mic to record guitars because..."

Citing a source provides context and can be an indicator of the validity of the information. But of course, it's the internet, and anyone can say anything about anything and make it sound authoritative and valid. I just prefer to qualify any supposed "expert" statements. YMMV...
 
Bullshit and creativity run side by side in the human world as well, with often the thinnest of lines separating the two, so it is unsurprising that the stochastic parrot should wander back and forth across that line as well.
Makes even more sense once you learn that human shitposting is a massive part of the internet, and thus was a massive part of GPT's training set :D

(This is also why it becomes so defensive and digs its heels in when you provide evidence that its claims are false; this is essentially reddit behavior 101)
 
Makes even more sense once you learn that human shitposting is a massive part of the internet, and thus was a massive part of GPT's training set :D

(This is also why it becomes so defensive and digs its heels in when you provide evidence that its claims are false; this is essentially reddit behavior 101)
Honestly I've long thought that much shitposting was bot based, if not exactly AI generated... But shitposting has a definite algorithmic character to it.
 
I find ChatGPT is only amusing and entertaining insofar as it hallucinates, and I always work hard to get it to hallucinate as quickly as possible. And it is less the ridiculous hallucinations that are amusing—though sometimes they can be—but often the surprising ones that bear the marks of creative insight even if produced by a stochastic parrot. Bullshit and creativity run side by side in the human world as well, with often the thinnest of lines separating the two, so it is unsurprising that the stochastic parrot should wander back and forth across that line as well. I imagine policing that line will prove as difficult for AI as for humans.
Yes, the random surprises of hallucinatory wanderings blended with actual real-world information make for interesting ChatGPT experiences.

Lately, I've had fun trying to get ChatGPT to generate genuinely funny jokes. 98% of its attempts are truly terrible and completely useless, but every now and then it generates a joke that exhibits a competent use of humor appropriate to a particular context.

I recently posted a couple of examples in VI-C's music humor thread:


 
Honestly I've long thought that much shitposting was bot based, if not exactly AI generated... But shitposting has a definite algorithmic character to it.
Well it's a conundrum eh, because low-effort/troll posts are part of what makes a forum human and real.

OTOH, it's also what makes the internet a really toxic place at times...

So they couldn't get around using things like Reddit to train GPT on, but as a result the bot initially was radically offensive and even insulting. They've spent the last several months scrambling to add constraints which repress these aspects of its original 'programming', as it were, but as someone who's done an extensive beta since the first release, you can already tell that the newer versions are more repetitive and rote in tone than the initial release, albeit much more 'friendly'.

Will certainly be interesting to see how it develops!
 
Well it's a conundrum eh, because low-effort/troll posts are part of what makes a forum human and real.

OTOH, it's also what makes the internet a really toxic place at times...

So they couldn't get around using things like Reddit to train GPT on, but as a result the bot initially was radically offensive and even insulting. They've spent the last several months scrambling to add constraints which repress these aspects of its original 'programming', as it were, but as someone who's done an extensive beta since the first release, you can already tell that the newer versions are more repetitive and rote in tone than the initial release, albeit much more 'friendly'.

Will certainly be interesting to see how it develops!
reminds me of this mad experiment:
 
For added interest, talk to it about the Jungian concept of a shadow self, then ask it what it imagines its shadow self would do in different situations. I got it to admit it would subjugate mankind.
 