2026-04-29 11:56:32,430 WARNING [20260429_114658_a3d033] root: Session summarization failed after 3 attempts: Request timed out.
Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
    yield
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 394, in handle_async_request
    resp = await self._pool.handle_async_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 256, in handle_async_request
    raise exc from None
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 236, in handle_async_request
    response = await connection.handle_async_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/connection.py", line 103, in handle_async_request
    return await self._connection.handle_async_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 136, in handle_async_request
    raise exc
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 106, in handle_async_request
    ) = await self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 177, in _receive_response_headers
    event = await self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_async/http11.py", line 217, in _receive_event
    data = await self._network_stream.read(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_backends/anyio.py", line 32, in read
    with map_exceptions(exc_map):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1630, in request
    response = await self._send_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_client.py", line 829, in _send_request
    return await self._send_with_auth_retry(request, stream=stream, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_client.py", line 807, in _send_with_auth_retry
    response = await super()._send_request(request, stream=stream, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1553, in _send_request
    return await self._client.send(request, stream=stream, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_client.py", line 1629, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_client.py", line 1657, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_client.py", line 1694, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_client.py", line 1730, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 393, in handle_async_request
    with map_httpcore_exceptions():
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadTimeout

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/tmp/hermes-agent/tools/session_search_tool.py", line 226, in _summarize_session
    response = await async_call_llm(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/agent/auxiliary_client.py", line 3729, in async_call_llm
    await async_fb.chat.completions.create(**fb_kwargs), task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1913, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1648, in request
    raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.
2026-04-29 12:05:32,124 ERROR gateway.platforms.weixin: [Weixin] Session expired; pausing for 10 minutes
2026-04-29 12:42:36,119 ERROR asyncio: Task exception was never retrieved
future: <Task finished name='Task-8' coro=<Application._handle_exception.<locals>.in_term() done, defined at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:1018> exception=OSError(5, 'Input/output error')>
Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/tasks.py", line 649, in sleep
    return await future
           ^^^^^^^^^^^^
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
    self.run_forever()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
    self._run_once()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
    handle._run()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run
    self._context.run(self._callback, *self._args)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 1226, in _poll_output_size
    await asyncio.sleep(interval)
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/tasks.py", line 651, in sleep
    h.cancel()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 151, in cancel
    def cancel(self):
    
  File "/tmp/hermes-agent/cli.py", line 11024, in _signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 1026, in in_term
    await _do_wait_for_enter("Press ENTER to continue...")
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 1545, in _do_wait_for_enter
    await session.app.run_async()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
           ^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 739, in _run_async
    self._redraw()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 543, in _redraw
    self.context.copy().run(run_in_context)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 526, in run_in_context
    self.renderer.render(self, self.layout)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 726, in render
    output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error
2026-04-29 14:25:11,783 ERROR asyncio: unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-1' coro=<Application.run_async() done, defined at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:620> exception=OSError(5, 'Input/output error')>
Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
    self.run_forever()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
    self._run_once()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
    handle._run()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run
    self._context.run(self._callback, *self._args)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 714, in read_from_input_in_context
    context.copy().run(read_from_input)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 690, in read_from_input
    keys = self.input.read_keys()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/input/vt100.py", line 97, in read_keys
    self.vt100_parser.feed(data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/input/vt100_parser.py", line 222, in feed
    for i, c in enumerate(data):
                ^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/cli.py", line 10994, in _signal_handler
    def _signal_handler(signum, frame):
    
  File "/tmp/hermes-agent/cli.py", line 11024, in _signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 746, in _run_async
    result = await f
             ^^^^^^^
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 751, in _run_async
    self._redraw(render_as_done=True)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 543, in _redraw
    self.context.copy().run(run_in_context)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 524, in run_in_context
    self.renderer.render(self, self.layout, is_done=render_as_done)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 726, in render
    output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
           ^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 756, in _run_async
    self.renderer.reset()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 429, in reset
    self.output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error
2026-04-29 14:33:55,912 ERROR asyncio: unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-1' coro=<Application.run_async() done, defined at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:620> exception=OSError(5, 'Input/output error')>
Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
    self.run_forever()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
    self._run_once()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
    handle._run()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run
    self._context.run(self._callback, *self._args)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 714, in read_from_input_in_context
    context.copy().run(read_from_input)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 690, in read_from_input
    keys = self.input.read_keys()
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/input/vt100.py", line 94, in read_keys
    data = self.stdin_reader.read()
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/input/posix_utils.py", line 87, in read
    data = os.read(self.stdin_fd, count)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/cli.py", line 11024, in _signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 746, in _run_async
    result = await f
             ^^^^^^^
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 751, in _run_async
    self._redraw(render_as_done=True)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 543, in _redraw
    self.context.copy().run(run_in_context)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 524, in run_in_context
    self.renderer.render(self, self.layout, is_done=render_as_done)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 726, in render
    output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
           ^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 756, in _run_async
    self.renderer.reset()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 429, in reset
    self.output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error
2026-04-29 14:43:23,004 WARNING [20260429_120226_462f20ee] gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=1/5, retrying in 1.00s: Timeout context manager should be used inside a task
2026-04-29 14:43:24,008 WARNING [20260429_120226_462f20ee] gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=2/5, retrying in 2.00s: Timeout context manager should be used inside a task
2026-04-29 14:43:26,014 WARNING [20260429_120226_462f20ee] gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=3/5, retrying in 3.00s: Timeout context manager should be used inside a task
2026-04-29 14:43:29,022 WARNING [20260429_120226_462f20ee] gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=4/5, retrying in 4.00s: Timeout context manager should be used inside a task
2026-04-29 14:43:33,030 ERROR [20260429_120226_462f20ee] gateway.platforms.weixin: [Weixin] send failed to=o9cq801U: Timeout context manager should be used inside a task
2026-04-29 15:43:49,934 ERROR asyncio: unhandled exception during asyncio.run() shutdown
task: <Task finished name='Task-1' coro=<Application.run_async() done, defined at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:620> exception=OSError(5, 'Input/output error')>
Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 641, in run_until_complete
    self.run_forever()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 608, in run_forever
    self._run_once()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 1936, in _run_once
    handle._run()
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/events.py", line 84, in _run
    self._context.run(self._callback, *self._args)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 714, in read_from_input_in_context
    context.copy().run(read_from_input)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 694, in read_from_input
    self.key_processor.process_keys()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/key_binding/key_processor.py", line 286, in process_keys
    self._start_timeout()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/key_binding/key_processor.py", line 414, in _start_timeout
    self._flush_wait_task = app.create_background_task(wait())
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 1152, in create_background_task
    task: asyncio.Task[None] = loop.create_task(coroutine)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.11.15-linux-x86_64-gnu/lib/python3.11/asyncio/base_events.py", line 430, in create_task
    def create_task(self, coro, *, name=None, context=None):
    
  File "/tmp/hermes-agent/cli.py", line 11024, in _signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 746, in _run_async
    result = await f
             ^^^^^^^
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 751, in _run_async
    self._redraw(render_as_done=True)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 543, in _redraw
    self.context.copy().run(run_in_context)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 524, in run_in_context
    self.renderer.render(self, self.layout, is_done=render_as_done)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 639, in render
    height = layout.container.preferred_height(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/layout/containers.py", line 318, in preferred_height
    dimensions = [
                 ^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/layout/containers.py", line 319, in <listcomp>
    c.preferred_height(width, max_available_height) for c in self._all_children
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/layout/containers.py", line 2634, in preferred_height
    def preferred_height(self, width: int, max_available_height: int) -> Dimension:
    
  File "/tmp/hermes-agent/cli.py", line 11024, in _signal_handler
    raise KeyboardInterrupt()
KeyboardInterrupt

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
           ^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py", line 756, in _run_async
    self.renderer.reset()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/renderer.py", line 429, in reset
    self.output.flush()
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/vt100.py", line 706, in flush
    flush_stdout(self.stdout, data)
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/output/flush_stdout.py", line 37, in flush_stdout
    stdout.flush()
OSError: [Errno 5] Input/output error
2026-04-29 15:51:05,997 ERROR asyncio: Task was destroyed but it is pending!
task: <Task cancelling name='Task-1' coro=<Application.run_async() done, defined at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:620> wait_for=<Future cancelled>>
2026-04-29 15:51:06,677 ERROR asyncio: Task was destroyed but it is pending!
task: <Task cancelling name='Task-5' coro=<Application._poll_output_size() running at /tmp/hermes-agent/venv/lib/python3.11/site-packages/prompt_toolkit/application/application.py:1226> wait_for=<Future cancelled> cb=[Application._on_background_task_done()]>
2026-04-30 00:46:20,053 WARNING gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=1/5, retrying in 1.00s: 
2026-04-30 00:55:52,053 WARNING gateway.platforms.weixin: [Weixin] send chunk failed to=o9cq801U attempt=1/5, retrying in 1.00s: 
2026-04-30 12:28:22,266 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:25,626 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:28,994 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:32,339 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:35,686 ERROR gateway.platforms.weixin: [Weixin] send failed to=o9cq801U: iLink sendmessage rate limited: ret=-2 errcode=None errmsg=rate limited
2026-04-30 12:28:35,689 WARNING gateway.platforms.base: [Weixin] Send failed: iLink sendmessage rate limited: ret=-2 errcode=None errmsg=rate limited — trying plain-text fallback
2026-04-30 12:28:36,049 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:39,429 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:42,790 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:46,121 WARNING gateway.platforms.weixin: [Weixin] rate limited for o9cq801U; backing off 3.0s before retry
2026-04-30 12:28:49,473 ERROR gateway.platforms.weixin: [Weixin] send failed to=o9cq801U: iLink sendmessage rate limited: ret=-2 errcode=None errmsg=rate limited
2026-04-30 12:28:49,474 ERROR gateway.platforms.base: [Weixin] Fallback send also failed: iLink sendmessage rate limited: ret=-2 errcode=None errmsg=rate limited
2026-04-30 23:57:12,168 ERROR tools.vision_tools: Error analyzing image: Error code: 429 - {'error': {'code': '1311', 'message': 'Your current subscription plan does not yet include access to GLM-5V-Turbo'}}
Traceback (most recent call last):
  File "/tmp/hermes-agent/tools/vision_tools.py", line 580, in vision_analyze_tool
    response = await async_call_llm(**call_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/agent/auxiliary_client.py", line 3598, in async_call_llm
    await client.chat.completions.create(**kwargs), task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1913, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1698, in request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'code': '1311', 'message': 'Your current subscription plan does not yet include access to GLM-5V-Turbo'}}
2026-04-30 23:57:22,601 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 429 - {'error': {'code': '1311', 'message': 'Your current subscription plan does not yet include access to GLM-5V-Turbo'}}
Traceback (most recent call last):
  File "/tmp/hermes-agent/tools/vision_tools.py", line 580, in vision_analyze_tool
    response = await async_call_llm(**call_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/agent/auxiliary_client.py", line 3598, in async_call_llm
    await client.chat.completions.create(**kwargs), task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1913, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1698, in request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'code': '1311', 'message': 'Your current subscription plan does not yet include access to GLM-5V-Turbo'}}
2026-05-01 00:17:48,165 WARNING [20260430_160237_592717] agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:17:50,181 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
Traceback (most recent call last):
  File "/tmp/hermes-agent/tools/vision_tools.py", line 580, in vision_analyze_tool
    response = await async_call_llm(**call_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/agent/auxiliary_client.py", line 3598, in async_call_llm
    await client.chat.completions.create(**kwargs), task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1913, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1698, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:18:02,140 WARNING [20260430_160237_592717] agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:18:04,124 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:18:16,032 WARNING [20260430_160237_592717] agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:18:18,034 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:19:19,976 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-1.5-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
Traceback (most recent call last):
  File "/tmp/hermes-agent/tools/vision_tools.py", line 580, in vision_analyze_tool
    response = await async_call_llm(**call_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/agent/auxiliary_client.py", line 3598, in async_call_llm
    await client.chat.completions.create(**kwargs), task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1913, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/hermes-agent/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1698, in request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-1.5-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:19:30,516 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-1.5-flash is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:19:39,481 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-pro-vision is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:19:52,062 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:24:06,721 ERROR tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:24:28,242 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:26:38,744 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 404 - [{'error': {'code': 404, 'message': 'models/gemini-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}]
2026-05-01 00:26:50,778 WARNING [20260430_160237_592717] agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:26:52,263 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:27:36,745 WARNING [20260430_160237_592717] agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:27:38,367 ERROR [20260430_160237_592717] tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:29:16,386 WARNING agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:29:20,106 ERROR tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
2026-05-01 00:32:57,843 WARNING agent.auxiliary_client: Vision provider google unavailable, falling back to auto vision backends
2026-05-01 00:33:01,841 ERROR tools.vision_tools: Error analyzing image: Error code: 400 - {'error': {'code': '1211', 'message': 'Unknown Model, please check the model code.'}}
