KeikenV Service Health

Last update: 2025-12-14 06:26:46

Platform Overview

Healthy: 26
Cloudflare Issues: 0
Other Errors: 0
Down: 0

Maintenance Notes

The Keiken system is currently updating its master service list. Some services may be temporarily unavailable during this process. We apologize for any inconvenience and appreciate your patience. Maintenance messages will be updated as services are configured and operational.

Active Alerts

All services are healthy.

Service Health

  • Feature-rich web UI for Stable Diffusion with extensive model support and extensions
    Latency: 83 ms · Checked: 2025-12-14 06:26:46
  • Travis Avatar Demo pipeline orchestrating persona, TTS, and animation services
    Latency: 34 ms · Checked: 2025-12-14 06:26:46
  • ComfyUI: healthy
    Node-based UI for Stable Diffusion with advanced image generation workflows
    Latency: 35 ms · Checked: 2025-12-14 06:26:46
  • Coqui TTS: healthy
    High-quality text-to-speech service with XTTS v2 model for natural voice synthesis
    Latency: 40 ms · Checked: 2025-12-14 06:26:46
  • Flowise: healthy
    No-code agent builder for creating LLM workflows and conversational AI applications
    Latency: 33 ms · Checked: 2025-12-14 06:26:45
  • AI-powered deep research agent using web search and local LLMs for comprehensive reports
    Latency: 36 ms · Checked: 2025-12-14 06:26:46
  • Grafana: healthy
    Open-source observability platform for real-time dashboards, alerting, and data visualization across multiple sources
    Latency: 54 ms · Checked: 2025-12-14 06:26:46
  • InvokeAI: healthy
    Professional AI image generation tool with creative control and batch processing
    Latency: 34 ms · Checked: 2025-12-14 06:26:46
  • Langfuse: healthy
    LLM observability and analytics platform for tracking model usage and performance
    Latency: 38 ms · Checked: 2025-12-14 06:26:45
  • MLflow: healthy
    Experiment tracking server and artifact store for model runs
    Latency: 2 ms · Checked: 2025-12-14 06:26:46
  • Prometheus metrics exporter for MLflow experiments
    Latency: 2 ms · Checked: 2025-12-14 06:26:46
  • Automated model evaluation and ranking system for comparing LLM performance
    Latency: 36 ms · Checked: 2025-12-14 06:26:46
  • Autonomous HuggingFace model search and validation service for finding candidate models
    Latency: 34 ms · Checked: 2025-12-14 06:26:46
  • MuseTalk: healthy
    Fast portrait animation with 10x faster inference and excellent lip-sync quality
    Latency: 45 ms · Checked: 2025-12-14 06:26:46
  • MyResumo: healthy
    AI-powered resume customization and optimization service with LaTeX template support
    Latency: 38 ms · Checked: 2025-12-14 06:26:45
  • n8n: healthy
    Workflow automation platform for connecting AI services, databases, and external APIs
    Latency: 42 ms · Checked: 2025-12-14 06:26:45
  • Neo4j: healthy
    Graph database for managing knowledge graphs and relationship-based data structures
    Latency: 37 ms · Checked: 2025-12-14 06:26:46
  • Ollama: healthy
    Local LLM inference server providing API access to various language models
    Latency: 34 ms · Checked: 2025-12-14 06:26:45
  • Model Context Protocol server for persistent memory and context management
    Latency: 38 ms · Checked: 2025-12-14 06:26:46
  • OpenWebUI: healthy
    AI chat interface powered by Ollama with multi-model support and conversation management
    Latency: 36 ms · Checked: 2025-12-14 06:26:45
  • Perplexica: healthy
    AI-powered search engine combining web search with LLM reasoning capabilities
    Latency: 48 ms · Checked: 2025-12-14 06:26:46
  • SadTalker: healthy
    Portrait animation service generating talking head videos from images and audio
    Latency: 74 ms · Checked: 2025-12-14 06:26:46
  • SearxNG: healthy
    Privacy-respecting metasearch engine aggregating results from multiple search providers
    Latency: 41 ms · Checked: 2025-12-14 06:26:46
  • Real-time health monitoring dashboard for all KeikenV platform services
    Latency: 36 ms · Checked: 2025-12-14 06:26:46
  • Supabase: healthy (auth challenge)
    Open-source Firebase alternative providing database, authentication, and storage services
    Latency: 36 ms · Checked: 2025-12-14 06:26:45
  • LLM fine-tuning job runner (REST scheduler) for Unsloth pipeline
    Latency: 2 ms · Checked: 2025-12-14 06:26:46

Recent Incidents

  • OpenMemory MCP: Recovered · 2025-12-14 03:30:16
  • OpenMemory MCP: down (HTTPSConnectionPool(host='openmemory.nebarisoftware.com', port=443): Read timed out. (read timeout=3)) · 2025-12-14 03:30:10
  • Ollama: Recovered · 2025-12-13 19:51:38
  • Ollama: down (HTTPSConnectionPool(host='ollama.nebarisoftware.com', port=443): Read timed out. (read timeout=3)) · 2025-12-13 19:51:31
  • OpenWebUI: Recovered · 2025-12-13 18:11:29
  • OpenWebUI: down (HTTPSConnectionPool(host='openwebui.nebarisoftware.com', port=443): Read timed out. (read timeout=3)) · 2025-12-13 18:11:23
  • SadTalker: Recovered · 2025-12-13 11:46:42
  • SadTalker: down (HTTPSConnectionPool(host='sadtalker.nebarisoftware.com', port=443): Read timed out. (read timeout=3)) · 2025-12-13 11:46:35
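The incident messages above are the error strings produced by the Python `requests` library when a probe exceeds its 3-second read timeout. A minimal sketch of such a probe, assuming `requests` and illustrative status labels (the actual monitor's internals are not shown here):

```python
# Minimal health-probe sketch. Assumes the `requests` library with a
# 3-second timeout, matching the "read timeout=3" incident messages.
import requests


def probe(url: str, timeout: float = 3.0) -> tuple[str, str]:
    """Run one HTTPS health check; return (status, detail)."""
    try:
        resp = requests.get(url, timeout=timeout)
        if resp.ok:
            return "healthy", ""
        return "error", f"HTTP {resp.status_code}"
    except requests.exceptions.Timeout as exc:
        # Yields messages like "HTTPSConnectionPool(...): Read timed out.
        # (read timeout=3)" seen in Recent Incidents.
        return "down", str(exc)
    except requests.exceptions.RequestException as exc:
        # DNS failures, refused connections, TLS errors, etc.
        return "down", str(exc)
```

A service would be flagged "down" on a timeout and "Recovered" once a subsequent probe returns "healthy"; the URL and label names here are assumptions, not the monitor's actual code.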

Latency Trends

Latest average: 37 ms · Best: 37 ms · Worst: 50 ms
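The "Latest average" figure can be reproduced from the 26 per-service latency samples in the Service Health list; "Best" and "Worst" presumably track the minimum and maximum of such snapshot averages over time (the `history` values below are illustrative):

```python
# Latency samples (ms) copied from the Service Health list above.
samples = [83, 34, 35, 40, 33, 36, 54, 34, 38, 2, 2, 36, 34,
           45, 38, 42, 37, 34, 38, 36, 48, 74, 41, 36, 36, 2]

latest_average = round(sum(samples) / len(samples))
print(f"Latest average: {latest_average} ms")  # → Latest average: 37 ms

# Assumed interpretation: Best/Worst are min/max over a history of
# snapshot averages; this history list is illustrative only.
history = [37, 41, 50]
print(f"Best: {min(history)} ms · Worst: {max(history)} ms")
```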