ChatGPT, CSS, and AppleScript

Did you see this AI chatbot can write CSS and AppleScript? Just provide a text description of what you want to accomplish and voilà. I’d be interested in what the power users think.

Are you able to help with css stylesheets?

Yes, I can help with CSS stylesheets. CSS (Cascading Style Sheets) is a stylesheet language used to describe the look and formatting of a document written in a markup language. It is most commonly used to style web pages written in HTML and XHTML, but can also be used with other markup languages.
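For instance, a single CSS rule pairs a selector with a block of declarations (the class name below is just for illustration):

```css
/* Give elements with class "note" a subtle highlight */
.note {
  background-color: #fffbe6;
  padding: 1em;
  border-left: 4px solid #e0c060;
}
```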

Can you write a style sheet that can adapt to both dark and light modes. And have nicely formatted tables, with a pleasing margin?

Sure, here is an example CSS stylesheet that can adapt to both dark and light modes and includes styles for nicely formatted tables with a pleasing margin:

/* Adapt to light and dark modes */
@media (prefers-color-scheme: light) {
  body {
    background-color: #fff;
    color: #000;
  }
}

@media (prefers-color-scheme: dark) {
  body {
    background-color: #000;
    color: #fff;
  }
}

/* Table styles */
table {
  margin: 24px;
  border-collapse: collapse;
}

th, td {
  padding: 12px;
  border: 1px solid #ccc;
}

/* Make the table header bold and centered */
th {
  text-align: center;
  font-weight: bold;
}

1 Like

Seriously? An “AI” that utters margins and paddings in pixels? In 2022? Now I know why I prefer HI to AI. Especially since this program behaves like the decades-old Eliza in throwing the user’s requirements back at them. And Eliza didn’t even pretend it was “intelligent”.

And a light/dark stylesheet that simply inverts black and white for foreground and background colors is a bad joke, at best, too.
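For comparison, here is a sketch of a less naive scheme – custom properties with a softened palette instead of a straight black/white swap (all the variable names and color values are just one possible choice):

```css
/* Define the palette once as custom properties */
:root {
  color-scheme: light dark;  /* lets the browser adapt form controls too */
  --bg: #fdfdfd;
  --fg: #1a1a1a;
  --border: #c8c8c8;
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg: #1e1e1e;           /* dark gray, not pure black */
    --fg: #d8d8d8;           /* off-white reduces glare on dark backgrounds */
    --border: #555;
  }
}

body {
  background-color: var(--bg);
  color: var(--fg);
}
```

Everything else in the stylesheet then references the variables, so the two modes cannot drift apart.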

Can you change pixels to rem please?

Sure, here is the updated code:

/* Adapt to light and dark modes */
@media (prefers-color-scheme: light) {
  body {
    background-color: #fff;
    color: #000;
  }
}

@media (prefers-color-scheme: dark) {
  body {
    background-color: #000;
    color: #fff;
  }
}

/* Table styles */
table {
  margin: 2.4rem;
  border-collapse: collapse;
}

th, td {
  padding: 1.2rem;
  border: 0.1rem solid #ccc;
}

/* Make the table header bold and centered */
th {
  text-align: center;
  font-weight: bold;
}

In this code, I have replaced px with rem in the margin, padding, and border properties. I also made a few other changes to improve the readability and formatting of the code.

That’s like teaching a child – you tell it what’s wrong, since you know what’s wrong. (And the answer is still wrong: with the browser’s default root size of 16px, 24px is 1.5rem, not 2.4rem – the bot just moved the decimal point.) But will it use rem next time?

And how are you going to tell it about dark mode?

I probably don’t get the point of asking a piece of software to create code that any knowledgeable human would create a better version of.

1 Like

While I understand @chrillek’s perspective, I think it makes sense to look at ChatGPT’s current coding capabilities in the context of the trajectory the technology might take.

If this improves by 25% year over year for 5 years – arguably a reasonable assumption – that compounds to roughly a threefold improvement (1.25⁵ ≈ 3.05), and then a lot will be possible in the near future.

Also, not everyone is a knowledgeable human when it comes to coding :sweat_smile: (or cares about the code being aesthetically perfect if it does what it’s supposed to, by and large … )

I’m probably dense here, but what is the purpose of coding without being knowledgeable?

In my mind, the purpose of writing CSS is to achieve something that

  • works
  • is technically up-to-date

Which for CSS in 2022 means providing for different device sizes, using relative units whenever possible, being aware of the vagaries of dark vs. light mode (which, for example, might mean swapping a light font weight for a bolder one), etc.

The interesting question (for me) would be what “improves” means. In order for this six-year-old to become a knowledgeable coder, someone would have to explain things to it. Like “if you see a navigation element, display it on larger devices and replace it with a hamburger menu on smaller ones” or “leave out the unnecessary stuff on smaller devices”. Wait – what exactly is unnecessary? BTW, there’s rarely a standalone CSS – it is nearly always related to an HTML document.
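The navigation rule described above might look something like this in CSS (the class names and the breakpoint are hypothetical, and the hamburger toggle itself would still need markup and a bit of script):

```css
/* Hypothetical markup: <nav class="nav-full"> and <button class="nav-burger"> */
.nav-burger {
  display: none;               /* hidden on large screens */
}

@media (max-width: 40rem) {    /* small devices */
  .nav-full {
    display: none;             /* hide the full navigation */
  }
  .nav-burger {
    display: block;            /* show the hamburger toggle instead */
  }
}
```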

In the 80s (last century) the German carmaker Volkswagen was working on what was then called an “expert system”: a piece of software that was meant to make it easier to diagnose problems in their cars. All they needed to do (in their minds) was to pick the brains of the car repair people and feed all that knowledge into the expert system.

The software never saw the light of day. Because there’s no simple (or even complicated) mechanism to pick the brain of experienced people and distill their knowledge into an algorithm.

1 Like

Well, one example would be to enable some workflow in DevonThink. A user wants it to work a certain way but has no prior experience with scripting. He finds some code in the forums and plays around with it until it produces the desired result. He knows about the functions he needs but is not per se knowledgeable about coding. It could definitely be useful if such a user could just tell ChatGPT “write a custom plugin for DevonThink that does x” and then use that code.

Yes, your point being that AI kind of gets better over time but without real qualitative improvements, correct? Sort of a horizontal evolution but not vertical?

Probably you have already, but if not I’d encourage you to just ask ChatGPT something. It’s quick and free to sign up for an account. Then respond to ChatGPT’s response with a more detailed follow-up question and be amazed at its intelligence… It’s really quite incredible what it can do; it understands nuances.

So there’s an argument to be made that, on this basis, the same might happen within a couple of years when it comes to creating code via ChatGPT – i.e., it might gain the qualitative dimension that you (probably with good reason, given your knowledge) feel is currently lacking in ChatGPT-generated code.

I’m not personally knowledgeable enough to answer this. But definitely an interesting question, as related to AI-generated code.

1 Like

I guess that this could easily start an endless discussion. Some claim it’s intelligent, others disagree. The only thing that most people could probably agree to is that it’s impressive.

1 Like

Free as in beer. I stopped the process when they asked for my phone number. WTF?

So, for the time being, no chance for me to try it out.

2 Likes

Same over here.

2 Likes

Interesting. I consider myself a privacy-conscious person, yet didn’t really have a problem with providing my phone number to OpenAI.
Then again, I’m not giving them any other data apart from random questions to ChatGPT, so don’t really see an issue… :face_with_monocle:

But why would they need your number?

An OpenAI account gives access to both ChatGPT and Dall-E.

In Dall-E you get a limited number of free credits to generate images based on text input, then there is a fee for additional ones. So I guess they need to verify that users don’t simply create a bunch of accounts.

Perhaps.

but what is the purpose of coding without being knowledgeable?

My two cents related to this… the thing that irks me is how people will wrongly say they’re ‘coding’ when using such tools. They’re doing no such thing. There’s no learning, no craft, no expertise on the user’s part. Just as AI-generated art is not made by the person who prompts for it.

Many democratizing technologies create a false sense of self (and often false self-importance), IMHO. So while such tools may do some impressive things, I am generally philosophically opposed to them. Learning, failing, learning, succeeding, on repeat – that is the way to truly claim titles like coder, artist, photographer, etc.
Just my opinion.

5 Likes

Try (some valid area code) 555-[a number between 0100-0199] (Why Do Fake Phone Numbers Start With 555?)…as long as they don’t have an AI (or some minimally competent JS) parsing your response, it’ll probably work. Oh, and be sure to use some nom-de-web and associated toss-away email account, make sure your VPN is running, Private Browsing is enabled, &c &c &c.

“But why would they need your number?”… Accumulating billions of Small Pieces of Data (a phone number, a user name,…) and associating them together by whatever criteria they share (time of day, location, browser fingerprints, OS, anything & everything that’s available; and whether they’re accurate associations or not, who cares?), all stuffed into a database, is a valuable product for sale, because individuals have (minimally, claimed to have) been identified through these associations. (Again, whether it’s accurate or not: who cares?)

This is how we get targeted: one Submit-button-click at a time. And the spam just keeps on coming…the ads never stop…the number of calls to renew your car’s warranty increase…and machine learning and AI do a lot of the work in all this enablement-of-endless-annoyance (and worse).

OTOH, such “play-walls” make me stop and ask myself if complying at all is really worth the trouble. For me, most of the time, not.

And now I click Reply and have no idea what associations will be made from my post, or by whom. Oh my goodness, life in The Modern Age.

4 Likes

Totally agree, they don’t need your number, but curiosity won out for me…

I think it is a glimpse of what might be possible in the future. AI is not going to immediately replace any knowledgeable human, but there are some tasks it will do much faster and more efficiently, and it promises to greatly amplify a knowledgeable worker’s productivity. An AI might look at the myriad ways a protein can fold and design a new drug, but we will still need deeply knowledgeable workers to turn that information into a pharmaceutical. Coding can be laborious and might be one of the areas where AI will assist.

The interesting thing about this chatbot is how it interacts with the user, remembers the previous conversation, has a sense of context, and provides sensible answers to many tricky questions. It is more human-like in its responses than anything we have seen before. We will probably be using these types of neural networks rather than Google in the future.

As for the comments regarding the privacy issues: 100% agree, they don’t need that.

1 Like

I tried it out for a few days. It produces good results for questions/tasks where the algorithm can pull a lot of data from freely available sources, as in the example above. It can by no means develop a creative solution to a complex problem, but it massively facilitates searching for standard solutions. And sometimes it produces horribly wrong results which look correct at first glance.

I think it has a good side because it takes away a lot of boring work. And it will probably force some teachers to change the kind of homework they are giving their students; they will probably need to put some more effort into it :wink: