Sidang handles the full paper lifecycle — abstract, review, revision, payment, camera-ready, proceedings — on one tenant-scoped platform. Built for the WoS/Scopus-tier cadence, not the Google-Forms improvisation.
Six years of running conferences distilled into one opinionated platform. Each tenant gets their own subdomain, their own branding, their own team — without reinventing the workflow each edition.
Tracks with their own keywords, chairs, and review pools. Paper codes auto-generated per conference (IPCME2026-001, -002…).
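Per-conference sequential codes like IPCME2026-001 can be sketched as follows — a minimal illustration, assuming each conference keeps its own monotonically increasing counter (function and parameter names are hypothetical, not Sidang's actual API):

```python
def paper_code(prefix: str, seq: int, width: int = 3) -> str:
    """Format a per-conference paper code, e.g. IPCME2026-001."""
    return f"{prefix}-{seq:0{width}d}"

# Each conference owns its counter, so codes never collide across tenants.
print(paper_code("IPCME2026", 1))   # IPCME2026-001
print(paper_code("IPCME2026", 12))  # IPCME2026-012
```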
Anonymized PDFs, per-paper reviewer rounds, and structured scoring on originality, technical quality, clarity, and significance — each with a reviewer confidence rating.
The international-standard Model B: pay after acceptance. Auto-generated invoice (PDF), a 14-day payment clock, proof upload, secretariat verification, receipt — with a strict drop policy for non-payment.
Final PDF + copyright form upload. Secretariat formatting checks. Certificates auto-generated for presenters, reviewers, and chairs.
Run a series, an annual, and a satellite event from one account. Each on its own subdomain; no cross-tenant data leakage, enforced at query level.
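"Enforced at query level" means every read carries the tenant filter. A minimal sketch of the idea, using sqlite3 as a stand-in database (schema and helper names are illustrative, not Sidang's actual code):

```python
import sqlite3

def scoped_papers(conn: sqlite3.Connection, tenant_id: int) -> list[tuple]:
    """All paper reads go through a helper that injects the tenant filter,
    so a cross-tenant read is impossible to express by accident."""
    return conn.execute(
        "SELECT id, title FROM papers WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (id INTEGER, title TEXT, tenant_id INTEGER)")
conn.executemany("INSERT INTO papers VALUES (?, ?, ?)",
                 [(1, "A", 10), (2, "B", 20)])
print(scoped_papers(conn, 10))  # tenant 20's paper is invisible here
```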
Every state change, upload, email, and verification is logged. PDPA-aware data export and deletion flows. Double-blind enforced at the render layer.
The editorial lifecycle Sidang enforces. Each transition fires a typed event — emails, invoices, audit logs — so your team never chases handoffs.
Author submits with track + co-authors.
Track chair accepts or rejects.
Anonymized PDF uploaded.
2–3 reviewers; structured scoring.
Final acceptance triggers invoice.
14 days, proof upload, verified.
Camera-ready approved & indexed.
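The steps above can be sketched as a small transition table, where each legal transition names the typed event that fires it (state and event names here are illustrative, not Sidang's actual identifiers):

```python
from enum import Enum

class State(Enum):
    SUBMITTED = "submitted"
    ABSTRACT_ACCEPTED = "abstract_accepted"
    UNDER_REVIEW = "under_review"
    ACCEPTED = "accepted"
    PAID = "paid"
    CAMERA_READY = "camera_ready"
    REJECTED = "rejected"
    DROPPED = "dropped"

# (current state, event) -> next state; side effects (emails, invoices,
# audit-log entries) hang off the event, never off ad-hoc code paths.
TRANSITIONS = {
    (State.SUBMITTED, "chair_accepts"): State.ABSTRACT_ACCEPTED,
    (State.SUBMITTED, "chair_rejects"): State.REJECTED,
    (State.ABSTRACT_ACCEPTED, "full_paper_uploaded"): State.UNDER_REVIEW,
    (State.UNDER_REVIEW, "final_accept"): State.ACCEPTED,   # triggers invoice
    (State.UNDER_REVIEW, "final_reject"): State.REJECTED,
    (State.ACCEPTED, "payment_verified"): State.PAID,
    (State.ACCEPTED, "deadline_passed"): State.DROPPED,     # strict 14-day policy
    (State.PAID, "camera_ready_approved"): State.CAMERA_READY,
}

def transition(state: State, event: str) -> State:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state.value} + {event}")
```

Modelling the lifecycle as an explicit table means an out-of-order action (say, verifying payment before final acceptance) fails loudly instead of silently corrupting a paper's status.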
What running a 500-paper conference on email, Google Forms, and a shared drive actually looks like — and what Sidang replaces it with.
We're partnering with a small cohort of academic organisers to refine Sidang with real workflows. You get a bespoke deployment, we get direct feedback. No lock-in, full data export.
Authors upload an anonymized PDF for review; reviewer identities are stripped from the feedback surfaced to authors ("Reviewer 1", "Reviewer 2" rather than names). Confidential chair-only comments are kept separate. The PDF viewer layer enforces this — it doesn't rely on reviewers remembering.
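Render-layer enforcement can be pictured as a transform that runs on every author-facing view — a minimal sketch under assumed field names (`reviewer`, `comments`, `chair_only` are hypothetical):

```python
def render_for_author(reviews: list[dict]) -> list[dict]:
    """Author-facing render: names become ordinal labels, chair-only
    comments are dropped before anything leaves the server."""
    return [
        {"label": f"Reviewer {i}", "comments": r["comments"]}
        for i, r in enumerate(reviews, start=1)
    ]

reviews = [
    {"reviewer": "Dr. A", "comments": "Tighten Section 3.", "chair_only": "Weak."},
    {"reviewer": "Dr. B", "comments": "Accept with minor edits."},
]
print(render_for_author(reviews))
```

Because the stripping happens in the render path rather than in reviewer guidelines, a reviewer who signs their comment by habit still appears to authors only as "Reviewer 1".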
Yes, optionally, per tenant. Spaces is primary and source of truth; Drive mirror is asynchronous and best-effort so Drive rate limits never slow your authors down. You pick which categories to mirror — submissions, payments, reviews, certificates — and the refresh token is encrypted at rest.
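The "asynchronous and best-effort" shape can be sketched with a queue between the primary write and the mirror — a toy model only, with dicts standing in for Spaces and the Drive API:

```python
import queue
import threading

mirror_q: queue.Queue = queue.Queue()

def store(primary: dict, key: str, blob: bytes) -> None:
    """Primary write succeeds synchronously; the mirror is merely queued,
    so Drive rate limits never block the author's upload."""
    primary[key] = blob          # Spaces: source of truth
    mirror_q.put((key, blob))    # Drive: mirrored later, best-effort

def mirror_worker(drive: dict) -> None:
    while True:
        key, blob = mirror_q.get()
        try:
            drive[key] = blob    # stand-in for the real Drive upload call
        except Exception:
            pass                 # mirror failures never reach authors
        finally:
            mirror_q.task_done()

primary, drive = {}, {}
threading.Thread(target=mirror_worker, args=(drive,), daemon=True).start()
store(primary, "paper.pdf", b"bytes")
mirror_q.join()                  # demo only; production never waits on the mirror
```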
v1 uses the international-standard Model B: pay after acceptance. Auto-invoice on final acceptance (Email 2), 14-day clock (configurable), author uploads bank proof, secretariat verifies in the dashboard, receipt PDF emailed. If unpaid by day 14, the paper is dropped from proceedings — no grace period. Payment gateways (Billplz / ToyyibPay / Stripe) arrive in v2; the abstraction is already in place.
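The 14-day clock is simple date arithmetic; a minimal sketch, assuming the window is counted from the invoice date (function names are illustrative):

```python
from datetime import date, timedelta

def payment_due(invoiced_on: date, window_days: int = 14) -> date:
    """Last day for verified bank proof; the window is configurable per tenant."""
    return invoiced_on + timedelta(days=window_days)

def is_dropped(invoiced_on: date, today: date, verified: bool) -> bool:
    """Strict policy: unverified past the window means out of proceedings."""
    return (not verified) and today > payment_due(invoiced_on)

print(payment_due(date(2026, 3, 1)))                               # 2026-03-15
print(is_dropped(date(2026, 3, 1), date(2026, 3, 16), False))      # True
```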
The data model does — reviewers can express bids (want / willing / neutral / avoid / conflict) per paper. The self-service bidding UI is on the v2 roadmap; in v1, chairs assign reviewers manually via the dashboard. COI-declared conflicts are surfaced at assignment time.
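The five bid levels map naturally onto an ordered enum; a sketch of how a chair-facing ranking might use them (enum values and helper names are assumptions, not the shipped schema):

```python
from enum import Enum

class Bid(Enum):
    WANT = 3
    WILLING = 2
    NEUTRAL = 1
    AVOID = 0
    CONFLICT = -1  # declared COI: surfaced at assignment time, never assignable

def assignable(bid: Bid) -> bool:
    return bid is not Bid.CONFLICT

def rank_candidates(bids: dict[str, Bid]) -> list[str]:
    """Chairs see non-conflicted reviewers, ordered by expressed interest."""
    return sorted(
        (reviewer for reviewer, b in bids.items() if assignable(b)),
        key=lambda reviewer: bids[reviewer].value,
        reverse=True,
    )
```

Keeping CONFLICT as a hard block rather than just the lowest rank is the point: a conflicted reviewer should be impossible to assign, not merely discouraged.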
The editorial workflow and audit trail are built to satisfy indexing requirements. Metadata export (OAI-PMH) and DOI/Crossref assignment ship in v2. v1 proceedings are generated from your own camera-ready PDFs; we don't lock the content.
Built in Malaysia. Default deployment is a Singapore / Malaysia VPS with DO Spaces in SGP1. You can bring your own infrastructure — we'll deploy there for enterprise tenants.
Pilots include a migration consult. Anything in a spreadsheet can typically be imported — papers, authors, reviewers, and their assignments — before your next submission window opens.