ELLYPSIS
    AI Concepts Explained

    Where Should a Small Business Start with AI?

    Sewar Sidou · 5 min read

    Most small businesses should start AI with one repetitive, well-defined process — not a tool. Here's the diagnostic and what the first 30 days actually look like.

    Most small businesses should start AI with a single internal process: one that is repetitive, runs on predictable inputs, and currently occupies someone whose time is worth more than the task. Starting with a specific workflow rather than a tool category cuts the failure rate and typically produces measurable results within 30 days.

    The wrong question is "which tool should we use?"

    Starting with tool selection is the most reliable way to spend four months and produce nothing useful.

    The pattern repeats itself. A leadership team reads something about AI, books a demo, buys licences, and runs a kickoff meeting. Six months later, the tool is open in two tabs and ignored everywhere else. McKinsey's 2025 State of AI report found that only 29% of companies with revenues under $100M have reached the scaling phase with AI, compared to 47% of companies with revenues above $5B. The gap is not access to tools. Both groups have the same tools available.

    The difference is whether workflow redesign happened. Buying software is easy. Changing how work actually gets done is the project.

    Start with a problem. The tool comes last.

    Which processes actually qualify?

    A process is ready for AI when it meets three conditions: it is repetitive, it has clear and consistent inputs, and its output can be verified.

    "Repetitive" means it happens more than once a week and follows roughly the same steps each time. "Clear inputs" means someone could hand it off on day one with a written description. "Verifiable output" means you can tell, within a reasonable time, whether the result is right or wrong.

    The processes that fail this test almost always fail on the third point. If you cannot verify the output, you cannot trust the automation. And if you cannot trust it, someone ends up checking every result manually — which means the time saved was zero.

    In practice, the first question in every AI assessment we run is: what takes the most time that has the least variation? That intersection is where AI earns its cost.
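    The three conditions above can be expressed as a simple checklist. This is an illustrative sketch only — the process names, fields, and thresholds are invented for the example, not part of any real assessment tool.

```python
# Illustrative sketch: the three readiness conditions from the text,
# expressed as a checklist. All names and numbers are made up.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    runs_per_week: int        # repetitive: happens more than once a week
    has_written_inputs: bool  # clear inputs: could be handed off with a written description
    output_verifiable: bool   # verifiable: right/wrong can be checked in reasonable time

def ai_ready(p: Process) -> bool:
    """A process qualifies only when all three conditions hold."""
    return p.runs_per_week > 1 and p.has_written_inputs and p.output_verifiable

invoicing = Process("invoice matching", runs_per_week=20,
                    has_written_inputs=True, output_verifiable=True)
contract_review = Process("contract review", runs_per_week=3,
                          has_written_inputs=False, output_verifiable=False)

print(ai_ready(invoicing))        # True
print(ai_ready(contract_review))  # False
```

    Note that contract review fails despite being repetitive — which mirrors the point above: most disqualifications come from the input and verification conditions, not frequency.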

    What the first 30 days look like

    A realistic first implementation cycle runs four weeks. Two weeks mapping and testing. Two weeks deploying in a sandbox before anything touches production.

    One of our first implementations was for a company that needed to improve their lead generation and prospecting process. The work was not about making sales calls better. It was about finding the right companies, qualifying them against specific criteria, and getting clean records into the CRM. That sounds simple until you sit with the person doing it and watch how many judgment calls happen in a process that nobody wrote down.

    We started with shadowing: one or two working sessions where we observed the process as it actually ran, asked about every decision point, and mapped what made something a qualified lead versus one to skip. From that, we built a tiered framework: what a strong lead looks like, what a weak one looks like, and what falls outside scope entirely. That framework became the verification layer the AI worked against.
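    A tiered framework of this kind is, at its core, a small set of explicit rules. The sketch below is hypothetical — the criteria (industry, employee count, contact data) are invented for illustration; a real framework comes out of the shadowing session, not a template.

```python
# Hypothetical sketch of a tiered qualification framework like the one
# described above. The specific criteria are invented for illustration.

def tier_lead(lead: dict) -> str:
    """Classify a lead as 'strong', 'weak', or 'out_of_scope'."""
    if lead.get("industry") not in {"manufacturing", "logistics"}:
        return "out_of_scope"   # outside the defined market entirely
    if lead.get("employees", 0) >= 50 and lead.get("has_contact_email"):
        return "strong"         # in scope and meets all qualifying signals
    return "weak"               # in scope, but missing signals

lead = {"industry": "logistics", "employees": 120, "has_contact_email": True}
print(tier_lead(lead))  # strong
```

    The value is not in the code — it is that every judgment call the person was making implicitly now exists as a rule that can be checked, argued with, and corrected.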

    The sandbox piece is worth its own emphasis. The goal is not to ship slowly. The goal is to ship fast and modular, and run the system in parallel against real inputs without writing to the actual data source. The person who owns the process reviews the outputs, flags the errors, and confirms the tier calls are right. Only once that review period closes does the system touch production. That period catches the edge cases the shadowing session missed. There are always some.
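    The parallel run described above is sometimes called a shadow deployment: the system processes real inputs, but its outputs go to a review queue instead of production. A minimal sketch, with invented names standing in for the real CRM and AI step:

```python
# Minimal sketch of a parallel ("shadow") run: real inputs in,
# outputs diverted for human review, production data untouched.
# process_record is a placeholder for whatever the AI step produces.

review_queue = []   # stands in for a spreadsheet or staging table

def process_record(record: dict) -> dict:
    # placeholder logic; a real step would call the AI system here
    tier = "strong" if record.get("employees", 0) >= 50 else "weak"
    return {**record, "tier": tier}

def shadow_run(records: list[dict]) -> None:
    """Run against real inputs, but divert outputs for review."""
    for rec in records:
        result = process_record(rec)
        review_queue.append(result)   # reviewed by the process owner; never writes to the CRM

shadow_run([{"company": "Acme", "employees": 80}])
print(review_queue[0]["tier"])  # strong
```

    The process owner works through the queue, flags errors, and only after that review period closes does the write path switch to production.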

    The mistake that kills most first attempts

    The most common failure in a first AI implementation is not the technology. It is implementing AI on top of a broken process.

    If a process has undocumented exceptions, informal workarounds, and rules that live only in one person's head, AI will not fix that. It will fail on the exceptions and produce outputs that need constant correction. The process gets abandoned, or worse, runs in parallel with manual checking — two processes instead of one.

    Document the process as it actually runs before you automate any part of it. Map the workarounds. Understand why they exist. Some are there for a reason that matters when something goes wrong. Automating over them means that reason disappears, and the failure arrives later without explanation.

    Fix the process first. Then automate.


    Frequently Asked Questions

    How much does it cost to start using AI in a small business?

    Costs vary depending on scope. Using AI tools through existing software like Microsoft 365 Copilot runs around $30 per user per month. A structured AI implementation with an external consultant typically starts at €1,500 and increases with the complexity of the process being redesigned. The first project almost always costs less than the time currently being spent on the process it replaces.

    Do I need a dedicated IT team to implement AI in my company?

    No. Most first implementations at small companies involve no custom development. The tools are configured, not coded. What you need is someone with enough time and authority to map the process being changed and manage the transition period — typically four to six weeks of supervised deployment. The bottleneck is almost never technical.

    How long before we see results from AI implementation?

    A well-scoped first implementation typically shows measurable results in four to six weeks. That assumes the process is well-documented going in. If documentation work is needed first, add two to four weeks. Projects that have not shown results after three months usually suffer from a missing process definition, not a tool problem.

    What is the difference between using ChatGPT and implementing AI in my business?

    Using ChatGPT is ad-hoc. Implementation is systematic. With ChatGPT, a person uses the tool when they remember to. With implementation, the AI is built into the workflow so it runs every time the process runs, with consistent inputs, monitored outputs, and a defined path when something falls outside normal range. One is a helpful tool. The other changes how the work gets done.

    Which department should implement AI first?

    Whichever department has the most time-consuming process with the most predictable inputs. In practice, this is often finance, operations, or customer support, because these functions run on structured data and repetitive tasks. Avoid starting with anything requiring nuanced judgment or with high stakes for error. Legal review and performance evaluation are not good first projects.


    Sewar Sidou is the founder of Ellypsis, an AI implementation consultancy for Danish SMEs and mid-market companies. Ellypsis runs AI Potential Assessments, builds implementations, and runs workshops for companies that are ready to move past awareness into actual results. More at ellypsis.dk.

    Want to put this into practice?

    Book a free call and find out where AI fits in your business.

    Let's talk