OpenClaw on Amazon Lightsail Launched – Run Your Own Autonomous Private AI Agent! | Amazon Web Services
Blog


@channyun
2026.03.09
Service · by 권준호
#Agent #AI #AWS #Lightsail #OpenClaw

Key Points

  • Amazon Lightsail now offers pre-configured instances of OpenClaw, an open-source, self-hosted autonomous private AI agent, simplifying deployment and securely providing personal digital assistance.
  • Users can launch an OpenClaw instance, pair their browser, and activate Amazon Bedrock integration through a provided script, enabling immediate AI chat and connection to messaging apps such as WhatsApp and Telegram.
  • Key considerations for OpenClaw on Lightsail include customizable IAM permissions, a pay-as-you-go cost model covering both the instance and Bedrock usage, and essential security recommendations for protecting the gateway.

This post announces the general availability of OpenClaw on Amazon Lightsail, offering a simplified and secure way for users to deploy and manage their own private, self-hosted, autonomous AI agents. OpenClaw functions as a personal digital assistant: it runs directly on a user's computer, integrates with various messaging applications (e.g., WhatsApp, Discord, Telegram), and performs complex tasks such as email management, web browsing, and file organization, going well beyond simple question-answering.

Deploying OpenClaw on Amazon Lightsail is streamlined. Users start in the Amazon Lightsail console, select "Create instance," specify their preferred AWS Region and Availability Zone, choose the Linux/Unix platform, and, critically, select "OpenClaw" from the available blueprints. They then choose an instance plan (a 4 GB memory plan is recommended for optimal performance), name the instance, and create it.
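The console steps above can also be expressed with the AWS CLI. This is an illustrative sketch only: the OpenClaw blueprint ID and the 4 GB bundle ID below are assumptions, not confirmed values; look up the real IDs with `aws lightsail get-blueprints` and `aws lightsail get-bundles` first.

```shell
#!/bin/sh
# Sketch of the "Create instance" console flow via the AWS CLI.
# BLUEPRINT and BUNDLE are assumed IDs -- verify them before running.
REGION="us-east-1"
AZ="${REGION}a"
BLUEPRINT="openclaw"      # assumed blueprint ID for the OpenClaw image
BUNDLE="medium_3_0"       # assumed ID of a 4 GB memory plan
NAME="my-openclaw"

CMD="aws lightsail create-instances \
  --region $REGION \
  --availability-zone $AZ \
  --blueprint-id $BLUEPRINT \
  --bundle-id $BUNDLE \
  --instance-names $NAME"

# Print the assembled command instead of executing it;
# remove the echo to actually create the instance.
echo "$CMD"
```

Printing the command first is a deliberate dry-run choice, since creating an instance starts hourly billing immediately.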

Upon successful instance launch, the setup proceeds in two main phases: browser pairing and AI model activation.

  1. Browser Pairing: This establishes a secure connection between the user's browser session and the OpenClaw instance. The user accesses a browser-based SSH terminal via the Lightsail "Getting Started" tab. This terminal displays the OpenClaw dashboard URL and a unique security credential, termed an "access token." The user copies these details, opens the dashboard in a new browser tab, and pastes the access token into the designated "Gateway Token" field. Final authorization for device pairing is provided by responding to prompts within the SSH terminal (typically by pressing 'y' then 'a'). Successful pairing is confirmed by an "OK" status on the OpenClaw dashboard.
  2. AI Model Activation: Lightsail OpenClaw instances are pre-configured to leverage Amazon Bedrock as their primary AI model provider. To activate Bedrock API access, the user copies a provided script from the Lightsail "Getting Started" tab and executes it within an AWS CloudShell terminal. Once this script completes, the AI assistant is fully functional and accessible for chat via the OpenClaw dashboard. Further integration with messaging applications is supported, with detailed instructions available in the Amazon Lightsail user guide.
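To make the activation step concrete, the sketch below shows roughly what a Bedrock-enablement script could do in CloudShell: attach a Bedrock policy to the instance's IAM role. The role name is a hypothetical placeholder (the actual role is created by the installation script); `AmazonBedrockFullAccess` is a real AWS managed policy, though the provided script may use a narrower custom policy.

```shell
#!/bin/sh
# Hypothetical sketch of the Bedrock activation step.
# ROLE_NAME is an assumed placeholder, not the script's actual identifier.
ROLE_NAME="OpenClawInstanceRole"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonBedrockFullAccess"

CMD="aws iam attach-role-policy \
  --role-name $ROLE_NAME \
  --policy-arn $POLICY_ARN"

# Print only; remove the echo to run this in an AWS CloudShell terminal.
echo "$CMD"
```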

Key considerations highlighted include:

  • Permissions: OpenClaw instances are granted AWS IAM permissions via an IAM role created by the installation script, which includes policies for Amazon Bedrock access. These permissions are customizable, though modifications should be done cautiously to avoid disrupting AI response generation.
  • Cost: Users incur on-demand hourly charges for their chosen Lightsail instance plan. Interactions with the AI assistant are processed through Amazon Bedrock and billed per token. Using third-party models from AWS Marketplace (e.g., Anthropic Claude, Cohere) may incur additional software fees on top of the per-token costs.
  • Security: Robust security practices are emphasized. Avoid exposing the OpenClaw gateway to the public internet. Treat the gateway authentication token like a password: rotate it frequently and store it in environment files rather than hardcoding it in configuration files.
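The token-handling recommendation can be sketched as a small rotation script. The env file path and the `OPENCLAW_GATEWAY_TOKEN` variable name are assumptions for illustration; adapt them to wherever your OpenClaw configuration actually reads its token from.

```shell
#!/bin/sh
# Sketch: rotate the gateway token and store it in an env file
# (never hardcoded in config). File path and variable name are assumed.
ENV_FILE="./openclaw.env"

# Generate a fresh 32-character hex token.
NEW_TOKEN=$(openssl rand -hex 16)

# Write it with owner-only permissions so other local users cannot read it.
umask 077
printf 'OPENCLAW_GATEWAY_TOKEN=%s\n' "$NEW_TOKEN" > "$ENV_FILE"

echo "token rotated; stored in $ENV_FILE"
```

Running this periodically (e.g., from cron) implements the frequent-rotation advice; the restrictive `umask` keeps the env file readable only by its owner.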