The Bard beta program is by invitation only, but anyone can apply. A few minutes after submitting my name to the wait list, I received an invitation to try Bard.
First I asked Bard to write some promotional copy. Bard generated a flowery but somewhat vague description of a fabulous island vacation at a luxury resort. The copy was solidly written with lots of appeal but not quite what I was looking for.
Next, I asked Bard for a promotional blurb for a mythical tangible product. Once again, the copy was well written, very promotional, and more than usable for a sales campaign.
I then asked Bard to write a critical wine review of the 1970 vintage Heitz Cellar Martha's Vineyard Cabernet Sauvignon (if you can find it at auction, it costs about $1,900 per 750 ml bottle). Bard informed me that he has no olfactory senses or taste buds and can't appreciate wine. But within a few seconds, I was presented with an articulate & accurate description of the characteristics of 1970 cabernets from the Napa Valley region -- complete with flavors, aromas, texture, finish and longevity. It was almost as good as (maybe better than) some of the reviews written by famous wine critics. All in all, not too shabby considering Bard has no idea what wine is.
I think Bard is useful for overcoming creative block. When you're staring at a blank page and short on creative inspiration, Bard can give your brain the jump start it needs.
What is the old saying? (something like) Give a million monkeys typewriters and they will eventually bang out a novel?
The main difference between Bard & monkeys (besides the mess they make) is the amount of time it takes. Bard can create a manuscript in seconds.
I was talking with a law professor at USC. He allows his law students to use AI in the classroom. He says AI Chat is as common a tool in practicing law as a calculator is in an accounting firm.
For law students (and lawyers), a chat AI program is probably very valuable, as it saves time researching precedent. But you still need to check all that data manually. A literature student is in a different situation.
Just saw an article about Buzzfeed. They're shutting down their news division, but trying to make up losses by using AI to write articles.
Just saw an article about Buzzfeed. They're shutting down their news division, but trying to make up losses by using AI to write articles.
By @Chuck Uebele
===========
AI articles can't be protected by copyright.
For more AI developments, watch this interesting video (23 min).
AI articles can't be protected by copyright.
By @Nancy OShea
...in the USA. My AI articles are (c) protected, also in the USA. I just need to prove the creation of a reasonably sophisticated prompt. "Write a letter to Nancy" would not be enough:
Dear Nancy,
I am writing to you to express my gratitude for your support and guidance during my time at ABC Inc. You have been a great mentor and friend to me, and I have learned a lot from you. I appreciate your kindness, patience, and professionalism. You have always encouraged me to pursue my goals and challenged me to grow as a person and as an employee.
I am sad to leave ABC Inc., but I am also excited for the new opportunity that awaits me at XYZ Ltd. I hope we can stay in touch and continue to share our experiences and insights. You can reach me at my personal email address or phone number anytime.
Thank you again for everything you have done for me. I wish you all the best in your future endeavors.
Sincerely,
Your name
OK. So if the AI is the creator, with no human lifespan, at what point do we start counting the plus-70 years? 😕
The issue I have with AI is that it breeds laziness, which will lead to a lot of misinformation. You obviously checked what was sent back to you to assess how accurate the information was. I'm not so sure that is going to be the case in many instances. Used like anything else it can help, but abused it can result in negligence, which may get you into a lot of trouble if you publish misinformation, particularly in cases of sensitive information.
Like I said in a previous post, one of the AIs accused someone of being a murderer when they queried it about an incident which happened some years ago. What I would question is: who is accountable for that? I would assume the person being accused has a good case to sue the company which produced the AI for millions of pounds.
Like I said in a previous post, one of the AIs accused someone of being a murderer when they queried it about an incident which happened some years ago.
By @osgood_
==========
I must have missed that post. Misinformation is highly subjective. It's based on one's perception and available data. In the absence of reliable information, confabulation occurs. Flat-Earth theorists firmly believe the world isn't round. All AI machines hallucinate, an odd quirk for which programmers have no explanation.
Imagine a self-learning machine gives a potent breakthrough formula to a scientist.
Based on available information, who is the saint & who is the sinner?
Now let's further imagine that
Now who are the saints & who are the sinners?
The more information we have to draw from, the more likely we are to arrive at better conclusions. It's no different for AI.
As for AI being good or evil, it's neither. It's merely a tool, no more good or evil than a 3D printer. AFAIK, nobody has successfully sued a 3D printer (yet).
I'd still like to know who is responsible for defamation of character and slander, both of which can be legally challenged. I'm sure that if your kids or grandkids in the future used an AI and asked it about you, you would not like the results if it implied you were a murderer when that was not true. Companies which produce these AIs must check the facts and be held accountable if their robot is incorrect. They should train their AIs not to provide what could be considered sensitive or disturbing information, information which cannot be verified, or information which hasn't been checked against multiple sources; otherwise it is they who should be responsible and prepared to pay the consequences.
AIs can be useful for non-critical information gathering, as an assistant only, but in my opinion it's treading on dodgy ground to make things up from unreliable sources when a user requires factual information.
In the U.S., defamation is very hard to prove and rarely goes to court. Three things must be shown:
1. Proof of false or misleading statements made publicly.
2. Proof of malicious intent.
3. Proof of damages.
If all 3 criteria aren't met, you can't prove defamation. Therefore a hallucinating AI, or a human with mental illness, can't be sued for things they say, as they are not of sound mind.
However, if a third-party news service publicly airs false information derived from an unreliable source, without the facts to back it up, you might have a case against the news service, provided your side can meet the 3 criteria.
Therefore a hallucinating AI, or a human with mental illness, can't be sued for things they say, as they are not of sound mind.
Is the company behind a hallucinating or mentally ill robot not responsible for its well-being, and therefore should they not be responsible if they choose to let their creation loose on the public?
I really don't have much time for AI, as in many cases it's proven to be 'not fit for purpose'. Unfortunately, and what is even more scary, neither are the majority of people who may use it and hang on its every word. Put both together and it's pretty concerning what could be said and suggested.
I'm not losing sleep over AI. It's not a path to "End Times" as some might have us believe.
Naysayers of AI remind me of early opponents of personal computers and the first iterations of Photoshop. Critics said computers would put humans out of work. And Photoshop would put artists out of work. It didn't happen then. It won't happen now.
If anything, computers & PS revolutionized content creation and opened up new jobs for humans to fill. AI's ultimate success & impact on society will depend on how humans use it & regulate it.
AI's ultimate success & impact on society will depend on how humans use it & regulate it.
By @Nancy OShea
That's what troubles me: I have no faith in the human race; it abuses everything and anything it can.
I think the one positive, from what I've seen at least, is that you have to know what to ask an AI robot to get it to provide you with anything usable, particularly in reference to coding, which requires some knowledge. At that point you probably know what you're doing anyway, so I just view it as another walking stick used by those who aren't professional and are just hacking things together.
It's still a concern, though, that there is a very real possibility it becomes a source for the lemmings out there, of whom there are more than not, who will use it to make a quick buck regardless of whether the information is correct.
Social media has proven to be a bad idea since its inception, in my view, as has YouTube to a certain extent, which is doing its best to ban/suspend those channels which post misinformation from crackpots who say they are being denied free speech but are really just in it for maximum views and the revenue.
crackpots who say they are being denied free speech but are really just in it for maximum views and the revenue.
By @osgood_
=========
Sadly, you've just described the U.S. Congress. The more outrageous things they say, the more TikTok views they get. It's a sickness.
If I have fears about AI, it's that it might have been trained on social media garbage.
I'm concerned about fact checking. Who is going to have the motivation, intent, or resources to check the AI articles? If you read stories now from different sources, they are pretty much the exact same story, pulled off some service. Who is going to check, and how? Look stuff up on the web, only to find the same AI-generated articles or information?
Who currently fact checks articles written by humans?
The same rules apply.
That's kind of my point. Will actual humans fact check things? They're not doing the best job now.
That's kind of my point. Will actual humans fact check things? They're not doing the best job now.
By @Chuck Uebele
You've hit the nail on the head. The more widely available these kinds of solutions become to the general public, the more potential there is for incorrect information and misinformation to be spread. Let's face it, the majority of humans are generally lazy, good-for-nothing imbeciles, just out to make money, with no pride and even fewer morals. Many will just take what an AI returns at face value, without going through rigorous checks.
Last time I wanted to access Bard, it told me that access was limited to the US and UK!
(Which I find highly unfair, as neither the US nor the UK is an important country, IMHO 😂. That leaves me with only Bing GPT.)
All countries are important, but Bard is in early experimental testing.
They may be rolling it out in English only to see how it reacts.
😂 I'm writing in English. If they wanted only English, they would limit it to English. I suppose they want to avoid another “Italy”, as happened with ChatGPT.
It is funny: I could join the waitlist with my work Gmail account... but not my personal one.
For my personal one, it says Bard is not available in my country. But the funny thing is that my work email address is also from Germany.