Dispatch Goods – Internal Operations Platform
Internal operations platform for Dispatch Goods with automated QuickBooks invoice sync, SMS notifications, and scheduled data processing.
Context
Dispatch Goods operates a reusable container logistics business. The operations team needed internal tooling to manage daily workflows without relying on developers for routine tasks.
This is not a customer-facing product—it's internal infrastructure that the operations team uses every day. When this system has issues, containers don't get tracked, invoices don't sync, and customers don't get notified.
Problem
Operations relied on:
- Manual data entry across multiple systems
- Copy-pasting between QuickBooks and spreadsheets
- Ad-hoc scripts that broke when data formats changed
- No visibility into what ran, when, or why it failed
The non-negotiables:
- Reliable — the system must run without developer intervention
- Idempotent — the same operation can run multiple times safely
- Fault-tolerant — one bad row can't break the entire batch
- Trustworthy — non-technical users must be able to rely on the results
Architecture
Three integrated subsystems:
- QuickBooks Online Integration — Two-way sync for invoices and customer data
- Heymarket SMS Integration — Operational notifications and customer communication
- Scheduled CSV Processing — Admin-configured automated data imports
Key Design Decisions
Admin-Configured Scheduling
Operations staff configure their own schedules without touching code:
```python
class ScheduledCSVJob(models.Model):
    name = models.CharField(max_length=100)
    source_path = models.CharField(max_length=500)    # S3 or SFTP path
    schedule = models.CharField(max_length=100)       # Crontab expression
    processor_type = models.CharField(max_length=50)  # Which processor to run
    enabled = models.BooleanField(default=True)
    last_run = models.DateTimeField(null=True)
    last_status = models.CharField(max_length=20, default="pending")
```
Celery Beat picks up these configurations and runs jobs at the specified times.
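In production, Celery Beat (or a database-backed scheduler) decides when a job is due. As a standalone illustration of what "due" means for a crontab expression, here is a minimal pure-Python sketch; the helper names are hypothetical, and real code should lean on `celery.schedules.crontab` or a library like `croniter` instead:

```python
from datetime import datetime

def field_matches(spec: str, value: int) -> bool:
    """Check one crontab field ('*', '*/15', '0,30', '8') against a value."""
    if spec == "*":
        return True
    if spec.startswith("*/"):
        return value % int(spec[2:]) == 0
    return value in {int(part) for part in spec.split(",")}

def is_due(schedule: str, now: datetime) -> bool:
    """True if a 5-field crontab expression matches the given minute.

    Simplified: crontab's Sunday=0 day-of-week convention vs. Python's
    Monday=0 weekday() is ignored here.
    """
    minute, hour, dom, month, dow = schedule.split()
    return (
        field_matches(minute, now.minute)
        and field_matches(hour, now.hour)
        and field_matches(dom, now.day)
        and field_matches(month, now.month)
        and field_matches(dow, now.weekday())
    )
```

A dispatcher task running once a minute can then enqueue every enabled `ScheduledCSVJob` whose `schedule` is due.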
Idempotent Processing
Every scheduled run is designed to be safely retryable:
```python
def process_csv_job(job_id: str, run_id: str):
    job = ScheduledCSVJob.objects.get(id=job_id)

    # Check if this exact run was already processed
    existing = ProcessingRun.objects.filter(
        job=job,
        run_id=run_id,
        status="completed",
    ).exists()
    if existing:
        logger.info(f"Run {run_id} already completed, skipping")
        return

    # Process with row-level error isolation
    for row in read_csv(job.source_path):
        try:
            process_row(row, job)
        except Exception as e:
            log_row_error(row, e)
            continue  # Don't fail the whole batch

    # Record completion so a retried run is a no-op
    ProcessingRun.objects.update_or_create(
        job=job, run_id=run_id, defaults={"status": "completed"}
    )
```
Automatic Retry on Failure
External APIs fail. The system handles it with automatic retries and backoff:
```python
@celery_app.task(
    bind=True,
    max_retries=3,
    default_retry_delay=300,  # 5 minutes
    autoretry_for=(QuickBooksAPIError, HeymarketAPIError),
)
def sync_invoice_to_quickbooks(self, invoice_id: str):
    try:
        invoice = Invoice.objects.get(id=invoice_id)
        quickbooks_client.create_or_update(invoice)
    except QuickBooksRateLimitError:
        # Back off longer for rate limits
        raise self.retry(countdown=600)
```
QuickBooks Integration
The Problem: The operations team was manually creating invoices in QuickBooks for every new order – a tedious, error-prone process.
The Solution: A scheduled job polls for new orders and automatically creates the corresponding invoices in QuickBooks:
- Automatic Invoice Creation — when a new order is created in our system, the job detects it and creates the matching invoice in QuickBooks
- Two-Way Sync — Payment status syncs back from QuickBooks via webhooks
- Conflict Handling — QuickBooks is the source of truth for payment information
This eliminated hours of manual data entry and removed the possibility of human error in invoice creation.
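The conflict rule above ("QuickBooks wins on payment data") can be captured as a small pure function. This is a sketch only; the payload shape and field names here are hypothetical, not the actual QuickBooks webhook schema:

```python
def apply_payment_update(local_invoice: dict, webhook_payload: dict) -> dict:
    """Merge a payment-status webhook payload into a local invoice record.

    QuickBooks is the source of truth for payment fields, so its values
    always win; all other local fields are left untouched.
    """
    PAYMENT_FIELDS = ("payment_status", "amount_paid", "paid_at")
    merged = dict(local_invoice)
    for field in PAYMENT_FIELDS:
        if field in webhook_payload:
            merged[field] = webhook_payload[field]
    return merged
```

Keeping the merge rule in one pure function makes the conflict policy easy to unit-test independently of the webhook plumbing.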
SMS Notifications (Heymarket)
Customers receive SMS updates for operational events:
```python
def notify_customer(customer_id: str, event_type: str, context: dict):
    customer = Customer.objects.get(id=customer_id)
    template = get_sms_template(event_type)
    heymarket_client.send_sms(
        to=customer.phone,
        message=template.render(context),
        idempotency_key=f"{customer_id}:{event_type}:{context.get('order_id')}",
    )
```
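What the idempotency key buys us: if a notification task is retried, the duplicate send with the same key is dropped rather than double-texting the customer. The real deduplication happens provider-side; this toy in-process sender just illustrates the contract (all names here are hypothetical):

```python
class DedupingSender:
    """Toy SMS sender that drops repeat sends with the same idempotency key."""

    def __init__(self):
        self._seen: set[str] = set()
        self.sent: list[tuple[str, str]] = []

    def send_sms(self, to: str, message: str, idempotency_key: str) -> bool:
        if idempotency_key in self._seen:
            return False  # duplicate: a retried task won't text the customer twice
        self._seen.add(idempotency_key)
        self.sent.append((to, message))
        return True
```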
Failure Modes Handled
| Failure Mode | Handling |
|---|---|
| Malformed CSV row | Skip row, log error, continue processing |
| QuickBooks API down | Queue for retry with exponential backoff |
| Heymarket rate limit | Backoff and retry, alert if persistent |
| Duplicate schedule run | Idempotency check prevents double-processing |
| Missing required field | Validation error logged, row skipped |
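The retry delays in the table follow the usual capped-exponential pattern. A sketch of the schedule, with jitter to avoid synchronized retries; the base and cap values are illustrative, not the production configuration:

```python
import random

def backoff_delay(attempt: int, base: float = 300.0, cap: float = 3600.0) -> float:
    """Capped exponential backoff in seconds: 300, 600, 1200, ... up to one hour."""
    return min(cap, base * (2 ** attempt))

def backoff_delay_with_jitter(attempt: int) -> float:
    """Full jitter: pick uniformly in [0, delay] so retries don't stampede."""
    return random.uniform(0, backoff_delay(attempt))
```

Celery can apply the same shape declaratively via `retry_backoff=True` with `retry_backoff_max` and `retry_jitter` on the task decorator.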
Frontend Integration
The operations platform includes internal dashboards built with React and Angular:
- Job Scheduler Dashboard — Non-technical users configure and monitor scheduled processing
- Execution Status Panel — Real-time visibility into running jobs, successes, and failures
- QuickBooks Sync Monitor — View sync history, pending items, and conflict resolution
- Error Review Interface — Operations staff can review and retry failed items without developer help
Design philosophy:
- Backend remains the source of truth for all operations
- Frontend validates but never assumes correctness
- Optimized for daily operational use, not demos
- Backend-enforced permissions and guardrails
Outcome
- Zero developer involvement for routine operations since launch
- Processes 200+ CSV rows daily with row-level error isolation
- QuickBooks sync runs automatically every 15 minutes
- Failed jobs retry without manual intervention
- Operations team has full visibility into job status and errors
This is production infrastructure that runs every day. The measure of success isn't feature count—it's that operations staff trust it to work.