Robot and Python writing code

Introduction

I, like many others in the tech industry, have found myself on the job hunt. Many technical positions require a code sample or project. There are plenty of reasons to wonder how effectively such projects reflect technical talent, or whether they demand an unfair amount of effort from the applicant… but this is not an article about that. Regardless of personal opinions, these projects give employers a point of reference on a candidate’s technical approach. How well documented is the code? Is there a pattern of separation of concerns? Does the API meet the requirements of the project?

For one company, I had an option to share a project, do a live coding interview, or work on a take-home project. I chose the take-home project because:

  • a new project provides the opportunity to highlight my fit for the role
  • I wanted to play more with ChatGPT
  • I knew I wanted to write this article.

So, here we are. The requirements for the project were pretty straightforward. “Build a functional REST API for CRUD permissions on a joke of the day.”

Obviously this could be fulfilled in a lot of ways, and how we go about providing a REST API is part of the decision. If I were going for the quickest turnaround, PostgREST could spin up a REST API with nothing but a database behind it: a docker-compose file and a SQL script would deliver the requirement!

I wanted to experiment with using ChatGPT to generate the scaffolding for my API, so I opted for a Python FastAPI implementation.

The resulting implementation is available on GitHub.

Getting started

I plugged my requirements into ChatGPT:

Build a functional REST API to CRUD on a Joke of the Day. (JOTD)

Requirements

    Build the application using your favorite language
    The JOTD has required fields of “text” and “date” and an optional field called “description”
    API request and response should be JSON

I was not sure what I expected, but ChatGPT provided a great high-level overview in response:

To build a functional REST API to CRUD on a Joke of the Day (JOTD), you will need to follow these general steps:

    Choose a programming language to build the application. Some popular choices for building REST APIs include Python, Java, and JavaScript.

    Create a JOTD model with the required fields of "text" and "date", and an optional field called "description".

    Implement CRUD endpoints for the JOTD, using standard RESTful conventions (e.g. GET for reading, POST for creating, PUT for updating, and DELETE for deleting).

    Use JSON for the request and response formats for the API.

...
It is important to note that this is a high-level overview of building a REST API, and there will be many details to consider depending on the specific requirements of the project and the programming language you choose.

Not super helpful for implementation, though. So, I asked the machine for a joke of the day rest api in python using fastapi. Here’s what we got:

Here is an example of how you could build a Joke of the Day (JOTD) REST API in Python using the FastAPI web framework:

    Install the required packages:

pip install fastapi
pip install pydantic
pip install sqlalchemy
pip install databases

    Create a JOTD model using Pydantic:

from pydantic import BaseModel

class JOTD(BaseModel):
    text: str
    date: str
    description: str = None

    Create a database connection using SQLAlchemy and Databases:

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from databases import Database

DATABASE_URL = "sqlite:///./jotd.db"

engine = create_engine(DATABASE_URL)
Base = declarative_base()

class JOTD(Base):
    __tablename__ = "jotd"
    id = Column(Integer, primary_key=True, index=True)
    text = Column(String, index=True)
    date = Column(String, index=True)
    description = Column(String)

Base.metadata.create_all(bind=engine)

database = Database(DATABASE_URL)

    Define CRUD endpoints using FastAPI:

from fastapi import FastAPI, HTTPException

app = FastAPI()

@app.post("/jotd", response_model=JOTD)
async def create_jotd(jotd: JOTD):
    query = jotd.insert().values(text=jotd.text, date=jotd.date, description=jotd.description)
    jotd_id = await database.execute(query)
    jotd = await get_jotd(jotd_id)
    return jotd

...

Now this is useful! There are issues, such as poor organization, some structural assumptions, etc… BUT this is a great way to go from 0 to something very quickly.
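One concrete example of those structural assumptions: the snippets define both the Pydantic schema and the SQLAlchemy table model as JOTD, so the two classes would collide if they ever shared a module, and response_model=JOTD in the endpoints is ambiguous about which one it means. A minimal reorganization (class names here are mine, not ChatGPT's) might look like:

from typing import Optional

from pydantic import BaseModel
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class JOTDTable(Base):
    ''' SQLAlchemy model: the "jotd" database table '''
    __tablename__ = "jotd"
    id = Column(Integer, primary_key=True, index=True)
    text = Column(String, index=True)
    date = Column(String, index=True)
    description = Column(String)


class JOTDSchema(BaseModel):
    ''' Pydantic schema: the JSON request/response shape '''
    text: str
    date: str
    description: Optional[str] = None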

I thought there was enough here to get started on the project, so, here we go…

Implementation

So, to get started, I created a directory and added an empty README.md and requirements.txt, a src/ subdirectory, and a Dockerfile. I already had a FastAPI Dockerfile from a different project, which looks like:

FROM python:3.10-bullseye

RUN pip install --upgrade pip \
    && apt-get update \
    && apt-get install -y python3-brlapi sqlite3 \
    && adduser --disabled-login --gecos '' api_user

USER api_user
WORKDIR /home/api_user

COPY --chown=api_user:api_user requirements.txt requirements.txt
RUN pip install --user -r requirements.txt
ENV PATH="${PATH}:/home/api_user/.local/bin"

COPY --chown=api_user:api_user src .

CMD ["uvicorn", "main:app", "--host", "0.0.0.0"]

Note that this Dockerfile provisions an api_user so that the API does not run as root. Out of curiosity, I submitted the Dockerfile to ChatGPT, and here’s what we got:

This is a Dockerfile that can be used to create a containerized version of a Python application. The image is based on the Bullseye version of Python 3.10, and it starts by installing the necessary dependencies such as pip and sqlite3. It then creates a new user called api_user, and sets the working directory to the home directory of the new user. Next it copies the requirements.txt file and installs the dependencies in the container.
It then copies the source code of the application in the container and sets the command to run the application using uvicorn, which is a production-ready ASGI server, and specifies the host to be 0.0.0.0 so that the application can be accessed from outside the container.

LGTM Rambo

Collaborating with the Robot

I asked ChatGPT to generate a README for the jotd API. Here’s the response (formatting removed for convenience).

Joke of the Day (JOTD) REST API

This is a simple REST API for managing Jokes of the Day (JOTD) built using Python, FastAPI, SQLAlchemy and Databases.
Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Prerequisites

    Python 3.8 or later
    Pipenv

Installing

    Clone the repository

git clone https://github.com/your-username/jotd-api.git

    Install the dependencies

pipenv install

    Run the migrations

pipenv run alembic upgrade head

    Run the development server

pipenv run uvicorn main:app --reload

The API will be available at http://localhost:8000/.
Endpoints

    POST /jotd: Create a new JOTD
    GET /jotd/{id}: Retrieve a specific JOTD by ID
    PUT /jotd/{id}: Update a specific JOTD by ID
    DELETE /jotd/{id}: Delete a specific JOTD by ID
    GET /jotd: Retrieve a list of all JOTDs

Built With

    FastAPI - The web framework used
    SQLAlchemy - Object-Relational Mapper
    Databases - Async ORM

Authors

    Your name - *Initial

Nice job robot!

Robot fistbump

Obviously this will need to be edited and updated, but look how much value we get out of having some basic README elements populated.

Now that I have a README drafted, a Dockerfile in place, and some requirements in my requirements.txt, it’s time to ask for some more code!
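For reference, my requirements.txt at this point was roughly the packages ChatGPT suggested plus uvicorn to serve the app (versions unpinned here; the exact contents live in the GitHub repo):

fastapi
pydantic
sqlalchemy
uvicorn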

Iteration with my AI partner

I knew we wanted CRUD endpoints, so I asked ChatGPT to generate FastAPI CRUD endpoints for jotd. Here’s what we got…

from fastapi import FastAPI, HTTPException
from sqlalchemy.orm import Session

app = FastAPI()

@app.post("/jotd", response_model=JOTD)
def create_jotd(jotd: JOTD, db: Session = Depends(get_db)):
    db.add(jotd)
    db.commit()
    db.refresh(jotd)
    return jotd

@app.get("/jotd/{id}", response_model=JOTD)
def read_jotd(id: int, db: Session = Depends(get_db)):
    jotd = db.query(JOTD).filter(JOTD.id == id).first()
    if jotd is None:
        raise HTTPException(status_code=404, detail="JOTD not found")
    return jotd

Obviously, the imports won’t work, but that’s OK; we can wire those in. I am more concerned about the API endpoints hitting the db object directly. So, I told ChatGPT that I wanted to refactor jotd to follow repository layer conventions.

Here’s what we got:

from typing import List

class JOTDRepository:
    def __init__(self, db: Session):
        self.db = db

    def get(self, id: int) -> JOTD:
        return self.db.query(JOTD).filter(JOTD.id == id).first()

    def get_all(self) -> List[JOTD]:
        return self.db.query(JOTD).all()

    def create(self, jotd: JOTD) -> JOTD:
        self.db.add(jotd)
        self.db.commit()
        self.db.refresh(jotd)
        return jotd

    def update(self, jotd: JOTD) -> JOTD:
        self.db.merge(jotd)
        self.db.commit()
        self.db.refresh(jotd)
        return jotd

    def delete(self, jotd: JOTD):
        self.db.delete(jotd)
        self.db.commit()

@app.post("/jotd", response_model=JOTD)
def create_jotd(jotd: JOTD, db: Session = Depends(get_db)):
    repository = JOTDRepository(db)
    return repository.create(jotd)

@app.get("/jotd/{id}", response_model=JOTD)
def read_jotd(id: int, db: Session = Depends(get_db)):
    repository = JOTDRepository(db)
    jotd = repository.get(id)
    if jotd is None:
        raise HTTPException(status_code=404, detail="JOTD not found")
    return jotd

@app.put("/jotd/{id}", response_model=JOTD)
def update_jotd(id: int, jotd: JOTD, db: Session = Depends(get_db)):
    repository = JOTDRepository(db)
    stored_jotd = repository.get(id)
    if stored_jotd is None:
        raise HTTPException(status_code=404, detail="JOTD not found")
    stored_jotd.text = jotd.text
    stored_jotd.date = jotd.date
    stored_jotd.description = jotd.description
    return repository.update(stored_jotd)

@app.delete("/jotd/{id}", response

Clearly we wouldn’t want the repository in the same file as the routes, but this was actually… helpful.
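For a sense of where that separation landed, the layout I was working toward looked roughly like this (based on the modules discussed in the rest of this article; the final repo may differ slightly):

.
├── Dockerfile
├── README.md
├── requirements.txt
└── src/
    ├── main.py      # FastAPI app and route handlers
    ├── models.py    # thin model layer between routes and repository
    ├── jotd.py      # repository: CRUD against the database
    ├── db.py        # engine, Session factory, and table definitions
    └── globals.py   # shared HTTP status code constants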

Learning assistant

I don’t tend to rely heavily on ORMs, and I tend to use language primitives for data validation. That said, my goal here is to leverage ChatGPT to write an API. So, based on this, I decided to refresh my memory on SQLAlchemy. I asked What does sqlalchemy declarative_base do?.

sqlalchemy.ext.declarative.declarative_base() is a function provided by the SQLAlchemy ORM (Object-Relational Mapper) library that creates a base class for declarative models. This base class is then used as a factory for new model classes.

The declarative system allows you to define your database tables and their columns using Python classes, rather than writing raw SQL statements. This makes it easier to maintain and reason about your database schema, and allows for a more pythonic and object-oriented approach to working with databases.

When you create a new class that inherits from the base class returned by declarative_base(), that class will be automatically mapped to a database table, and its attributes will be mapped to columns in that table.

For example, you can define a JOTD model like this:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class JOTD(Base):
    __tablename__ = "jotd"
    id = Column(Integer, primary_key=True, index=True)
    text = Column(String, index=True)
    date = Column(String, index=True)
    description = Column(String, index=True)

This JOTD class will be mapped to a table named jotd in your database and its attributes will be mapped to columns.

When you create an instance of this class and add it to a session, SQLAlchemy will automatically create the corresponding rows in the database table, and when you query the table, SQLAlchemy will automatically create instances of the class and populate them with the data from the rows in the table.

The declarative_base() also allows you to create a single place to define common columns like created_at and updated_at which will be shared across all models.

The #1 rule when working with AI: Trust but verify. Hitting the docs took a little longer, but it got me to the same summary ChatGPT had generated. Thanks, robo pal!

Wiring it together and fixing the style

From my jaunt into the sqlalchemy docs, I knew I wanted to use a sessionmaker. I told ChatGPT to update the repository it created to `add a sessionmaker to the CRUD repository layer`:

'''Here is an example of how you can create a CRUD repository module using a SessionMaker in SQLAlchemy:'''

from sqlalchemy.orm import Session
from sqlalchemy.orm import sessionmaker

SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

class JOTDRepository:
    def __init__(self):
        self.session:Session
    def __enter__(self):
        self.session = SessionLocal()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.session.close()
    def get(self, id: int) -> JOTD:
        return self.session.query(JOTD).filter(JOTD.id == id).first()

    def get_all(self) -> List[JOTD]:
        return self.session.query(JOTD).all()

    def create(self, jotd: JOTD) -> JOTD:
        self.session.add(jotd)
        self.session.commit()
        self.session.refresh(jotd)
        return jotd

    def update(self, jotd: JOTD) -> JOTD:
        self.session.merge(jotd)
        self.session.commit()
        self.session.refresh(jotd)
        return jotd

    def delete(self, jotd: JOTD):
        self.session.delete(jotd)
        self.session.commit()

COOL! That will do us for now, so let’s think about our models: make a model for jotd that uses the repository. As expected at this point, ChatGPT generated a reasonable class-based model. I prefer a more functional style. So, I asked it: use a functional python style to make a model using the repository. I was bowled over when this actually worked!

Getting the AI to call my code

From here, I spent a little time making a db.py file for the repository. The elements it needed had already been provided by ChatGPT in the context of the repo module, so after I had it hashed out, I showed ChatGPT my db.py module.

here's a python db.py file
'''
Creates databases and connection object
'''

import os

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

database_url = os.getenv('DATABASE_URL', 'sqlite:///jotd.db')

engine = create_engine(database_url)
Base = declarative_base()


class JOTD(Base):
    ''' Joke of the day DB table '''
    __tablename__ = 'jotd'
    id = Column(Integer, primary_key=True, index=True)
    text = Column(String)
    date = Column(String, index=True)
    description = Column(String)


Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)

From this input, ChatGPT provided the following description:

This is a Python module that creates a database connection and a JOTD model using SQLAlchemy, an ORM (Object-Relational Mapper) library for Python.

It starts by importing the necessary modules from SQLAlchemy, including the create_engine, Column, Integer, String and declarative_base modules.

Then it sets the database_url variable to either the value of the DATABASE_URL environment variable or a default SQLite URL. This allows the application to be run locally using SQLite or on a remote server using any other database.

The engine variable is then created using the create_engine method, which takes the database_url as a parameter.

A Base variable is defined using the declarative_base method provided by SQLAlchemy. This Base variable will be used as a factory for new model classes.

The JOTD class is defined, which inherits from the Base variable. It has 4 attributes, id, text, date, description which will be mapped to columns in the jotd table when the table is created.

The Base.metadata.create_all(engine) method creates all the tables defined in the model classes that inherit from Base.

Finally, a Session variable is defined using the sessionmaker function provided by SQLAlchemy. This variable is bound to the engine variable and will be used to create a new session for each request.

I literally could not have said it better myself. Now that ChatGPT “knows” about my db.py file, I instructed it: using a functional python style, create a jotd.py repository file that uses db.py.

Here’s the output:

from typing import Tuple, List

from db import Session, JOTD

def create_jotd(text: str, date: str, description: str) -> Tuple[JOTD, int]:
    session = Session()
    jotd = JOTD(text=text, date=date, description=description)
    session.add(jotd)
    session.commit()
    session.refresh(jotd)
    session.close()
    return jotd, HTTP_CREATED

CANCEL! We want to use with syntax! So we tell the robot: using a functional python style, create a jotd.py repository file that uses db.py with Session() as session

'''
Here is an example of how you can use a functional programming style to create a JOTD repository using the JOTD model defined in db.py, with session management:
'''

from typing import Tuple, List

from db import Session, JOTD

def create_jotd(text: str, date: str, description: str) -> Tuple[JOTD, int]:
    with Session() as session:
        jotd = JOTD(text=text, date=date, description=description)
        session.add(jotd)
        session.commit()
        session.refresh(jotd)
    return jotd, HTTP_CREATED

def read_jotd(jotd_id: int) -> Tuple[JOTD, int]:
    with Session() as session:
        jotd = session.query(JOTD).filter(JOTD.id == jotd_id).first()
        if jotd is None:
            return None, HTTP_NOT_FOUND
    return jotd, HTTP_OK

def update_jotd(jotd_id: int, text: str, date: str, description: str) -> Tuple[JOTD, int]:
    with Session() as session:
        jotd = session.query(JOTD).filter(JOTD.id == jotd_id).first()
        if jotd is None:
            return None, HTTP_NOT_FOUND
        jotd.text = text
        jotd.date = date
        jotd.description = description
        session.commit()
    return jotd, HTTP_OK

def delete_jotd(jotd_id: int) -> Tuple[JOTD, int]:
    with Session() as session:
        jotd = session.query(JOTD).filter(JOTD.id == jotd_id).first()
        if jotd is None:
            return None, HTTP_NOT_FOUND
        session.delete(jotd)
        session.commit()
    return jotd, HTTP_OK

'''
This JOTD repository defines the CRUD(Create, Read, Update and Delete) operations for JOTD model and uses the Session object from db.py in a functional way. It opens and closes a new session for each request within the context of with Session() as session: and session.close() respectively.

It also returns HTTP status codes along with the results of operations, which can be useful for handling errors and responses in your application.
'''

Well, that will do! But since this is a joke of the DAY, we need to get the joke by date! I asked ChatGPT to update read_jotd to get the jotd by date; it had no difficulties:

def read_jotd_by_date(date: str) -> Tuple[JOTD, int]:
    with Session() as session:
        jotd = session.query(JOTD).filter(JOTD.date == date).first()
        if jotd is None:
            return None, HTTP_NOT_FOUND
    return jotd, HTTP_OK

Generating its own references

I decided to follow its recommended convention of using HTTP codes across layers. This isn’t my preferred approach, but the point of the exercise is to lean on the AI-generated code.

I asked ChatGPT to create a globals.py file that hydrates the HTTP codes. It gave me:

'''Here is an example of how you can create a globals.py file that defines the HTTP status codes as global constants:'''

HTTP_OK = 200
HTTP_CREATED = 201
HTTP_NO_CONTENT = 204
HTTP_BAD_REQUEST = 400
HTTP_UNAUTHORIZED = 401
HTTP_FORBIDDEN = 403
HTTP_NOT_FOUND = 404
HTTP_INTERNAL_SERVER_ERROR = 500

'''
You can use these constants in your application to set the HTTP status code of the response. For example, in the create_jotd function of the JOTD repository, you can return jotd, HTTP_CREATED to indicate that the JOTD was successfully created.
You can also add other HTTP status codes that you may need in your application to this file.
It's important to note that this file should be imported and used in all the modules of your application that need to use the HTTP status codes.
'''
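Worth noting: Python’s standard library already ships these values as the http.HTTPStatus enum, so a hand-rolled globals.py is arguably optional. If I wanted to trim my own code, something like this would do the same job:

from http import HTTPStatus

# Same values as the generated constants, straight from the stdlib.
HTTP_OK = HTTPStatus.OK                                          # 200
HTTP_CREATED = HTTPStatus.CREATED                                # 201
HTTP_NOT_FOUND = HTTPStatus.NOT_FOUND                            # 404
HTTP_INTERNAL_SERVER_ERROR = HTTPStatus.INTERNAL_SERVER_ERROR    # 500

I stuck with the plain integers ChatGPT generated, but the enum members compare equal to their integer values, so either works with FastAPI.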

Adding exception handling

Since there is so little business logic, we’re going to skip a discussion of the models for this project. I observed that we were lacking error handling at the repository layer. To address this without adding a lot of code, we can use a Python decorator. To fast-track the process, I asked ChatGPT: decorator to wrap exception handler for internal server errors.

'''
Here's an example of a Python decorator that can be used to wrap a function and handle internal server errors (HTTP status code 500) that may occur within that function:
'''

from typing import Callable
from fastapi import HTTPException

def handle_exceptions(func: Callable) -> Callable:
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as e:
            raise HTTPException(status_code=500, detail=str(e))
    return wrapper

You can then use this decorator to wrap the functions that you want to handle the exception for

@handle_exceptions
async def create_jotd(jotd: JOTDCreate):
    jotd, code = await create_jotd(jotd.text, jotd.date, jotd.description)
    return jotd.id if jotd is not None else None, code

'''
This will catch any exception that occurs in the create_jotd function, and raise a new HTTPException with a status code of 500 and the error message as the detail.
It's also good practice to log the exception message for debugging purposes.
You can use python built-in logging module to log the exception and its traceback.
'''

Of course, AI has its limitations. My implementation ended up looking like:

import logging
from functools import wraps

import globals


def internal_err_handler(func):
    '''
    Decorator that handles uncaught errors and returns None and an error code.
    '''
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as ex:
            logging.exception(ex)
            return None, globals.HTTP_INTERNAL_SERVER_ERROR
    return wrapper

functools.wraps ensures the wrapped function keeps the original function’s name (and docstring) instead of being reported as wrapper. Compared to ChatGPT’s version, I was surprised the generated wrapper hard-coded 500 rather than referencing the HTTP_INTERNAL_SERVER_ERROR constant it had defined earlier.
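As a quick usage sketch (assuming internal_err_handler lives in, or is imported into, the repository module), applying it is just one decorator line:

import globals
from db import JOTD, Session


@internal_err_handler
def read_jotd_by_date(date: str):
    ''' Any uncaught error in here now comes back as (None, 500). '''
    with Session() as session:
        jotd = session.query(JOTD).filter(JOTD.date == date).first()
        if jotd is None:
            return None, globals.HTTP_NOT_FOUND
    return jotd, globals.HTTP_OK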

Regenerating the routes

Given all the iteration and updates to the repository and models, I thought it would be cool to feed the code back into GPT and have it regenerate the FastAPI routes. After feeding in my model file, ChatGPT replied:

This models.py file defines functions that act as a layer between the API and the repository module, which handles the interactions with the database.

The functions in this file take input in the form of plain Python objects (e.g. JOTDInput) and use the repository functions to perform CRUD operations on the Joke of the Day data.

For example, the create_jotd() function takes a JOTDInput object as input, and passes its properties (text, date, description) to the create_jotd() function in the repository module to create a new Joke of the Day
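My models.py isn’t reproduced here, but based on that summary it is a thin pass-through layer. A rough sketch of its shape (the function names match what the controller below imports; the details are simplified):

'''
Thin model layer: unpacks input objects and delegates to the jotd repository.
'''
from typing import Tuple

import jotd
from db import JOTD


def create_jotd(jotd_input) -> Tuple[JOTD, int]:
    ''' Unpack the input object and hand its fields to the repository '''
    return jotd.create_jotd(jotd_input.text, jotd_input.date, jotd_input.description)


def get_jotd_by_id(jotd_id: int) -> Tuple[JOTD, int]:
    return jotd.read_jotd(jotd_id)


def get_jotd_by_date(date: str) -> Tuple[JOTD, int]:
    return jotd.read_jotd_by_date(date)


def update_jotd(jotd_id: int, jotd_input) -> Tuple[JOTD, int]:
    return jotd.update_jotd(jotd_id, jotd_input.text, jotd_input.date, jotd_input.description)


def delete_jotd(jotd_id: int) -> Tuple[JOTD, int]:
    return jotd.delete_jotd(jotd_id)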

After this, I wrote: Create a fast API controller that implementes the jotd models

Reply:

'''
Here is an example of how you could create a FastAPI controller that implements the JOTD models:
'''

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from models import JOTD, create_jotd, get_jotd_by_id, get_jotd_by_date, update_jotd, delete_jotd

app = FastAPI()

class JOTDInput(BaseModel):
    text: str
    date: str
    description: str = None

@app.post("/jotd/create")
async def create_joke(jotd: JOTDInput):
    new_jotd, code = create_jotd(jotd)
    if code != 201:
        raise HTTPException(status_code=code)
    return {"jotd": new_jotd}

@app.get("/jotd/{jotd_id}")
async def read_joke(jotd_id: int):
    jotd, code = get_jotd_by_id(jotd_id)
    if code != 200:
        raise HTTPException(status_code=code)
    return {"jotd": jotd}

@app.get("/jotd/date/{date}")
async def read_joke_date(date: str):
    jotd, code = get_jotd_by_date(date)
    if code != 200:
        raise HTTPException(status_code=code)
    return {"jotd": jotd}

@app.put("/jotd/{jotd_id}")
async def update_joke(jotd_id: int, jotd: JOTDInput):
    updated_jotd, code = update_jotd(jotd_id, jotd)
    if code != 200:
        raise HTTPException(status_code=code)
    return {"jotd": updated_jotd}

@app.delete("/jotd/{jotd_id}")
async def delete_joke(jotd_id: int):
    _, code = delete_jotd(jotd_id)
    if code != 200:
        raise HTTPException(status_code=code)
    return {"message": "jotd deleted"}

'''
This example defines a FastAPI app and creates several endpoints, each of which corresponds to a CRUD operation for the Joke of the Day data. These endpoints use the functions from the models module to perform the corresponding operations on the Joke of the Day data. The JOTDInput class is used to define the request payloads for the create and update endpoints.
'''
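As a quick sanity check (assuming the controller above lives in main.py, as the Dockerfile’s uvicorn command implies, and the wiring holds together), FastAPI’s TestClient can exercise the routes without standing up a server:

from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

# A date with no joke should surface the repository's 404 through the controller.
assert client.get("/jotd/date/1999-12-31").status_code == 404

# FastAPI also auto-generates interactive docs, handy for poking the other routes.
assert client.get("/docs").status_code == 200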

I ended up making a lot of iterations and updates from here, and you can see the final product on GitHub.

Final thoughts and conclusions

This differs from my usual approach to building software. For simple CRUD/POC apps, I tend to minimize dependencies; I probably would have used sqlite directly and passed around dictionaries. The object of these assignments is to assess an engineer’s efficiency as well as their style, and I like necessary-and-sufficient dependency management. By building on the AI-generated code, I feel like I was able to deliver a more feature-complete solution than I would have with my usual approach.

This is far from a “perfect” solution. I would go back and add ISO-8601 format enforcement for the date field; I would add some business logic around jokes and whether they can be duplicated/associated with multiple dates. I don’t especially love using HTTP codes in the repository layer. Despite all of these issues, we were able to quickly generate a working API that satisfied all of the requirements. The solution is portable and can connect to different databases. All-in-all I would call this iterative development experience a big win.
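For instance, the date enforcement would be a small change on the request-schema side; a sketch of how I’d approach it (Pydantic validates ISO-8601 date strings into datetime.date and rejects malformed ones, which FastAPI surfaces as a 422):

import datetime
from typing import Optional

from pydantic import BaseModel


class JOTDInput(BaseModel):
    text: str
    date: datetime.date   # rejects malformed dates like "2023-13-45"
    description: Optional[str] = None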

So far, AI tools show some good generative capacity but lack the ability to string the snippets into a bigger picture. Being able to quickly template and generate code which I can modify to my needs enables me to move a lot faster.

Chat GPT’s final thoughts

Finally, I asked ChatGPT to write an article about using ChatGPT to write a REST API. Here’s what we got.

Recently, the field of natural language processing has seen a significant advancement with the release of GPT-3 (Generative Pre-trained Transformer 3) by OpenAI. GPT-3 is a language model that can generate human-like text based on a given prompt, making it a powerful tool for various natural language processing tasks.

One of the areas where GPT-3 can be particularly useful is in the development of REST APIs. REST (Representational State Transfer) APIs are a way for different systems to communicate with each other over the internet, and are commonly used for web-based applications.

One way that GPT-3 can help in the development of a REST API is by generating code snippets for various parts of the API. For example, it can generate code for the routes, models, and controllers of an API. This can significantly speed up the development process as it eliminates the need to manually write repetitive code.

Another way that GPT-3 can help in the development of a REST API is by providing suggestions for best practices and design patterns. For example, it can suggest how to structure the API's routes, how to handle errors and exceptions, and how to implement security features. This can be particularly useful for developers who are new to API development, as it can help them to learn and understand the best practices quickly.

GPT-3 can also be used to generate documentation for the API. This can include documentation for the API's endpoints, parameters, and responses, as well as examples of how to use the API. This can save a lot of time and effort for the developers, as they do not have to manually write the documentation.

In addition to these benefits, GPT-3 can also be used to generate test cases for the API. This can help to ensure that the API is working correctly and that it is robust enough to handle different types of inputs and edge cases.

In conclusion, GPT-3 can be a powerful tool for the development of REST APIs in Python. It can generate code snippets, provide suggestions for best practices and design patterns, generate documentation, and generate test cases. This can significantly speed up the development process and help to ensure that the API is robust and well-documented. However, it is important to note that GPT-3 is not a substitute for a developer's expertise, but rather a tool to help them in their work.