M-Tech Labs AI
Eastbourne · UK
/ AI Consultancy

The groundwork that makes AI safe to turn on.

Copilot, agents and assistants are only as trustworthy as the data, permissions and policies underneath them. We help you get that foundation right — drawing on the governance and compliance depth of M-Tech Systems.

01/ Starting points

You might be here because…

Most AI consultancy engagements start with one of these. If any sound familiar, there's a conversation to have.

  1. Copilot licences on hold

    You've bought (or are about to buy) Microsoft 365 Copilot, but nobody's comfortable flipping the switch until someone's looked at what it can actually see.

  2. Shadow AI is already happening

    Staff are pasting client data into ChatGPT, Gemini and whatever's in their browser — with no policy, no audit trail and no sign-off.

  3. SharePoint permissions are a liability

    Fifteen years of "share with everyone" links, inherited folders and orphaned groups. Fine until an LLM gets a search API over the top of it.

  4. A regulator, auditor or client has asked

    You need a defensible answer on how AI is used, where the data goes, and what controls exist — and "we're thinking about it" isn't going to cut it.

02/ Capabilities

Where we help.

Short, focused engagements that leave you with a working map of risk, a cleaned-up tenancy and a policy your teams will actually follow.

03/ Engagement formats

Three ways to start.

Most clients begin with a Readiness Review, then move into a Sprint or a Retainer once the shape of the work is clear.

Readiness Review
2 weeks · fixed fee

A focused audit of your tenancy, permissions and data classification posture against the AI tool you're about to deploy. Ends with a prioritised remediation plan and a go / no-go recommendation.

  • Tenancy & oversharing audit
  • Risk register with severity
  • Remediation roadmap
  • Exec-level readout

Governance Sprint
4–6 weeks

We work alongside your team to actually fix the issues a review uncovers — sensitivity labels, DLP policies, retention, identity hygiene — and stand up the policy and training that keeps it from drifting back.

  • Purview / sensitivity labels
  • DLP & retention policies
  • Permission remediation
  • AI acceptable-use policy

Advisory Retainer
Ongoing · quarterly

A standing advisory relationship for teams rolling out AI over months, not weeks. Quarterly reviews, change-assessment on new tools, regulator-ready documentation kept current.

  • Quarterly posture review
  • New-tool assessments
  • Policy version control
  • Incident support on call

04/ Outcomes

What a readiness engagement changes.

  1. Confidence to turn Copilot on

    You know exactly what it can see, who can see it, and what it would never surface. No nasty surprises in the demo.

  2. A defensible audit trail

    When the regulator, the auditor or the board asks how AI is being used — you have answers, not a shrug.

  3. Fewer surprises at rollout

    Permission and labelling problems are caught before deployment, not during a live incident.

  4. Teams that know the rules

    Clear, short guidance on what's fine, what needs review, and what's off-limits — so adoption doesn't stall on fear.

05/ Why us

The groundwork most AI shops don't do.

AI consultancy from a team that already runs production security and compliance — not a dev firm that picked up governance as an afterthought.

  1. Governance depth, not just dev depth

    Most software firms can build you an AI assistant. Very few can tell you whether it's safe to plug it into your tenancy. We do both.

  2. Assurix + NCSC CAF 4.0 aligned

    The same controls we apply to AI rollout — privileged access, supplier risk, monitoring, incident response — are the ones Assurix verifies live against CAF 4.0.

  3. Backed by a working MSP

    M-Tech Systems runs production identity, security and compliance for the organisations we build for. The AI work sits on top of an assurance practice, not alongside it.

/ Frameworks & standards

We map AI use against the frameworks your auditor already knows.

UK GDPR · ICO AI guidance · NCSC CAF 4.0 · NCSC AI principles · ISO 27001 · Cyber Essentials · Assurix · Microsoft Purview · Entra ID · NIST AI RMF · EU AI Act

/ Start a conversation

Not sure if your tenancy is ready for Copilot?

A two-week readiness review tells you where the risks are, what to fix first and what 'good' looks like — before you spend on licences.