
Commit 1cf788e

Merge pull request #2 from TwistingTwists/drafts
Drafts
2 parents 31cf8bd + 3b45594 commit 1cf788e

24 files changed: +21,661 -4 lines

tailwind.config.js

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
// tailwind.config.js
module.exports = {
  content: ["./src/**/*.{js,jsx,ts,tsx}"],
  theme: {
    extend: {},
  },
  plugins: [],
  corePlugins: {
    preflight: false, // Prevents Tailwind from resetting Docusaurus styles
  },
  darkMode: ['class', '[data-theme="dark"]'], // Enables compatibility with Docusaurus dark mode
};
Lines changed: 198 additions & 0 deletions
@@ -0,0 +1,198 @@
---
slug: "forevervm-minimal"
title: "Minimal example of ForeverVM"
date: 2025-03-01T00:00:00+00:00
draft: true
authors: [abeeshake]
tags: [ python, gvisor ]
---

# Building ForeverVM

## Introduction

In the rapidly evolving landscape of Large Language Models (LLMs), one of the most powerful capabilities is code generation and execution. However, executing untrusted code generated by LLMs poses significant security risks. This blog post explores a solution: a gVisor-based Python REPL that provides a secure, isolated environment for executing untrusted code in LLM-based workflows.

## The Challenge of Code Execution in LLM Applications

LLMs like GPT-4, Claude, and others have demonstrated remarkable capabilities in generating functional code across various programming languages. This has led to the development of "agentic" applications where LLMs can:

1. Generate code to solve specific problems
2. Execute that code to obtain results
3. Analyze the results and iterate on the solution
4. Interact with external systems and APIs

However, executing code generated by LLMs introduces several challenges:

- **Security Risks**: LLM-generated code might contain vulnerabilities, either accidentally or through prompt injection attacks
- **Resource Management**: Unconstrained execution could lead to resource exhaustion
- **Stateful Execution**: Many tasks require maintaining state between multiple code executions
- **Isolation**: Code execution needs to be isolated from the host system and other users' code

## Introducing gVisor-based Python REPL

Our solution addresses these challenges through a multi-layered security approach:

1. **Docker Containerization**: Provides basic isolation from the host system
2. **gVisor Sandbox**: Adds an additional security layer by intercepting and filtering system calls
3. **TCP Interface**: Limits interaction to a simple TCP API, reducing attack surface
4. **Session Management**: Maintains stateful execution while ensuring isolation between sessions

### Architecture Overview

The system consists of several key components:

1. **TCP Server**: A Python TCP server that accepts code execution requests and maintains stateful sessions
2. **Docker Container**: Provides containerization for the Python environment
3. **gVisor Runtime**: Adds an additional layer of isolation by intercepting and filtering system calls

### How It Works

1. The server runs inside a Docker container with gVisor's runsc runtime
2. Clients connect to the server via TCP and send Python code to execute
3. The server executes the code in a session-specific environment
4. Results are returned to the client
5. Sessions persist between requests, allowing for stateful execution (a minimal server sketch follows this list)

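To make the flow concrete, here is a rough sketch of what such a length-prefixed TCP server could look like. The wire format (a 4-byte big-endian length prefix followed by a JSON payload with `code`, `session_id`, and `output` fields) is inferred from the client example later in this post; the actual server in the repository may be organized differently.

```python
import io
import json
import socket
import uuid
from contextlib import redirect_stdout

sessions = {}  # session_id -> persistent globals dict shared across requests

def send_msg(conn, payload):
    data = json.dumps(payload).encode("utf-8")
    conn.sendall(len(data).to_bytes(4, byteorder="big"))  # 4-byte length prefix
    conn.sendall(data)

def recv_msg(conn):
    length = int.from_bytes(conn.recv(4), byteorder="big")
    return json.loads(conn.recv(length).decode("utf-8"))

def handle(conn):
    send_msg(conn, {"output": "ready"})            # initial greeting the client skips
    request = recv_msg(conn)
    session_id = request.get("session_id") or str(uuid.uuid4())
    env = sessions.setdefault(session_id, {})      # state persists between requests
    buf = io.StringIO()
    try:
        with redirect_stdout(buf):
            exec(request["code"], env)             # isolation comes from gVisor, not Python
        send_msg(conn, {"output": buf.getvalue(), "session_id": session_id})
    except Exception as exc:
        send_msg(conn, {"output": f"{type(exc).__name__}: {exc}", "session_id": session_id})

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8000))
server.listen()
while True:
    conn, _ = server.accept()
    with conn:
        handle(conn)
```

Nothing in this Python layer is doing the sandboxing: `exec` runs arbitrary code, and the isolation comes entirely from the Docker container and the gVisor runtime wrapped around the process.
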
## Setting Up Your Own Secure Python REPL

### Prerequisites

- Docker
- gVisor (runsc)

### Installation Steps

1. Install gVisor:
```bash
sudo apt-get update && \
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg

# Install runsc
curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null
sudo apt-get update && sudo apt-get install -y runsc
```

2. Configure Docker to use gVisor:
```bash
sudo runsc install
sudo systemctl restart docker
```

3. Clone the repository:
```bash
git clone https://github.com/username/gvisor-based-python-repl.git
cd gvisor-based-python-repl
```

4. Run the server (a sketch of what `run.sh` might do follows these steps):
```bash
./run.sh
```

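The repository's `run.sh` isn't reproduced here; conceptually it only needs to build the image and start the container under the `runsc` runtime while publishing the REPL's TCP port. A hypothetical version (the image name and Dockerfile are placeholders, not files confirmed to exist in the repo) might look like:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build the image containing the Python TCP server (Dockerfile assumed to exist in the repo)
docker build -t gvisor-python-repl .

# Start it under gVisor's runsc runtime and expose the REPL's TCP port
docker run --rm \
  --runtime=runsc \
  -p 8000:8000 \
  gvisor-python-repl
```

You can confirm a container really runs under gVisor with `docker run --rm --runtime=runsc alpine dmesg`, which prints gVisor's own boot messages rather than the host kernel's.
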
## Integrating with LLM Applications

### Client-Side Integration

To integrate this secure Python REPL with your LLM application, you'll need to:

1. Establish a TCP connection to the server
2. Send Python code generated by your LLM
3. Receive and process the execution results
4. Optionally maintain session state for multi-step workflows

Here's a simple Python example:

```python
import socket
import json

def send_code_to_repl(code, session_id=None):
    # Connect to the server
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("localhost", 8000))

    # Skip initial greeting
    length_bytes = sock.recv(4)
    message_length = int.from_bytes(length_bytes, byteorder='big')
    sock.recv(message_length)

    # Prepare request
    request = {
        "code": code
    }
    if session_id:
        request["session_id"] = session_id

    # Send request
    request_json = json.dumps(request)
    request_bytes = request_json.encode('utf-8')
    length = len(request_bytes)
    sock.sendall(length.to_bytes(4, byteorder='big'))
    sock.sendall(request_bytes)

    # Receive response
    length_bytes = sock.recv(4)
    message_length = int.from_bytes(length_bytes, byteorder='big')
    response_bytes = sock.recv(message_length)
    response = json.loads(response_bytes.decode('utf-8'))

    sock.close()
    return response

# Example usage
response = send_code_to_repl("print('Hello, world!')")
print(response["output"])

# Use the session_id for subsequent requests
session_id = response["session_id"]
response = send_code_to_repl("x = 42\nprint(f'x = {x}')", session_id)
print(response["output"])
```

### LLM Workflow Integration

Here's how you might integrate this into an LLM-based workflow:

1. User provides a problem description
2. LLM generates Python code to solve the problem
3. Your application sends the code to the secure REPL
4. Execution results are returned to the LLM
5. LLM analyzes the results and iterates if needed

This approach allows for powerful agentic workflows while maintaining security. A sketch of such a loop is shown below.

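The loop below is only an outline: `generate_code` and `looks_complete` are hypothetical placeholders for whatever LLM call and stopping criterion your application uses, while `send_code_to_repl` is the client function from the previous section.

```python
def solve_with_repl(problem: str, max_iterations: int = 3) -> str:
    """Generate code with an LLM, run it in the sandboxed REPL, and iterate."""
    session_id = None
    feedback = ""
    for _ in range(max_iterations):
        code = generate_code(problem, feedback)         # hypothetical LLM call
        response = send_code_to_repl(code, session_id)  # executed inside the gVisor sandbox
        session_id = response["session_id"]             # reuse the session so state carries over
        feedback = response["output"]
        if looks_complete(feedback):                    # hypothetical stopping check
            break
    return feedback
```

Because the session persists, a later iteration can build on variables and imports created by an earlier one, which is what lets the LLM refine partial results instead of starting from scratch.
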
## Security Considerations

While this system provides strong isolation, it's important to understand its limitations:

- Side-channel attacks might still be possible
- Resource exhaustion could affect container performance
- New vulnerabilities in gVisor or Docker could compromise security

Regular updates and security audits are recommended, and capping the container's resources (sketched below) limits the blast radius of runaway code.

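Docker's standard resource flags are a straightforward mitigation for the resource-exhaustion point above; the limits here are illustrative values, not settings taken from the project, and `gvisor-python-repl` is the hypothetical image name used earlier:

```bash
# Illustrative limits: cap memory, CPU, and the number of processes the sandbox may spawn
docker run --rm \
  --runtime=runsc \
  --memory=512m \
  --cpus=1 \
  --pids-limit=128 \
  -p 8000:8000 \
  gvisor-python-repl
```
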
## Use Cases

This secure Python REPL is particularly useful for:

1. **Educational Platforms**: Allow students to execute code safely
2. **AI Coding Assistants**: Enable LLMs to execute and test generated code
3. **Data Analysis Workflows**: Run data processing scripts in a secure environment
4. **Automated Code Testing**: Test user-submitted code without security risks
5. **Interactive Documentation**: Create interactive code examples that users can modify and run

## Conclusion

The gVisor-based Python REPL provides a secure foundation for executing untrusted code in LLM-based workflows. By combining Docker containerization with gVisor's sandboxing capabilities, it offers a robust solution to the security challenges of code execution in AI applications.

As LLMs continue to evolve and become more integrated into software development workflows, tools like this will be essential for balancing the power of code generation with the necessary security constraints.

Whether you're building an AI coding assistant, an educational platform, or any application that involves executing untrusted code, this approach provides a practical and secure solution.
Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
---
slug: "http-client-rust"
title: "Versatile HTTP Client in Rust"
date: 2025-03-09T00:00:00+00:00
authors: [abeeshake]
tags: [ rust ]
draft: true
---

- Generate a summary of the codebase
- Generate a title with a framework (title, body summary, audience, rating, Alex Hormozi value framework)

https://github.com/zed-industries/zed/blob/4846e6fb3ab046e3b746f442257adff6730a3187/crates/http_client/src/async_body.rs

---- framework

```
title: "Sample Title"

Body_summary:
Provides a generic HTTP client framework with features like redirects and proxy support. It focuses on managing asynchronous request bodies for HTTP requests. It includes: an implementation of AsyncRead, ensuring the body can be read asynchronously, which is crucial for streaming data.
Testing support with BlockedHttpClient, which always fails (useful for scenarios where network access is blocked), and FakeHttpClient, which allows mocking HTTP responses for testing purposes.

audience: Rust developers, curious engineers looking to level up their software skills

When you generate the title for the blog, also analyse the value delivered using the following formula for value:

Value = (DreamOutcome x PerceivedLikelihoodOfAchievement) / (TimeDelay x EffortAndSacrifice)

It consists of 4 variables.

Value is directly proportional to two variables: DreamOutcome x PerceivedLikelihoodOfAchievement.

And Value is inversely proportional to the other two variables: TimeDelay x EffortAndSacrifice.

Here is what those 4 variables in the value equation mean.

Dream outcome: What is the dream outcome that you want to achieve?

Perceived likelihood of success: How likely do you think it is that you will achieve this outcome?

Perceived time to success: How long do you think it will take to achieve this outcome?

Perceived difficulty of success: How difficult do you think it will be to achieve this outcome?

Do the value analysis of your title.

For each of the variables above, generate a table with the following columns (give scores out of 5; a larger number indicates a better outcome):
dream outcome, perceived likelihood of success, perceived time to success, perceived difficulty of success
```

------

This code compiles in BAML; use it when you can.

```ts
class ContentAnalysis {
  title string @description("Generated title for the content")
  valueMetrics ValueAnalysis
}

class DreamClass {
  score int @description("Score from 0-5")
  reasoning string @description("Detailed explanation for the score")
}

class LikelihoodClass {
  score int @description("Score from 0-5")
  reasoning string @description("Supporting factors and explanation for confidence score")
}

class EffortClass {
  score int @description("Score from 0-5")
  reasoning string @description("Breakdown of effort requirements and justification")
}

class ValueAnalysis {
  dreamOutcome DreamClass @description("Assessment of importance and benefit")
  likelihood LikelihoodClass @description("Confidence assessment with reasoning")
  timeDelay int @description("Estimated time required in days")
  effort EffortClass @description("Effort level assessment with explanation")
}

function AnalyzeContentValue(content: string) -> ContentAnalysis[] {
  client "openai/gpt-4o-mini"
  prompt #"
    You are an expert content strategist and business analyst. Analyze the given content to:
    1. Generate an engaging, SEO-friendly title
    2. Calculate the potential value using this formula:
       Value = (DreamOutcome × Likelihood) / (TimeDelay × Effort)

    Rate components on a scale of 0-5 (whole numbers only) except for timeDelay, which is in days.
    Consider audience impact and increasing the value proposition in your analysis.

    Give a list of 5 options for content and a value proposition for each one in the following format.

    {{ ctx.output_format }}

    {{ _.role("user") }} {{ content }}
  "#
}

test SimpleContent {
  functions [AnalyzeContentValue]
  args {
    content "Building a new React component library with accessibility features and modern design patterns."
  }
}

test TechnicalContent {
  functions [AnalyzeContentValue]
  args {
    content #"
      Building an HTTP client framework in Rust:
      - Async request body handling
      - Redirect and proxy support
      - Test utilities including BlockedHttpClient and FakeHttpClient
      Target audience: Rust developers and engineers interested in systems programming
    "#
  }
}
```
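
If you generate a client from this file (BAML's standard workflow produces a `baml_client` package via `baml-cli generate`), calling the function from Python should look roughly like the sketch below; treat the exact import and call shape as an assumption to verify against your BAML version.

```python
# Assumes `baml-cli generate` has already produced the `baml_client` package.
from baml_client import b

analyses = b.AnalyzeContentValue(
    content="Building an HTTP client framework in Rust: async request bodies, "
            "redirect/proxy support, and test utilities."
)

# Each option carries the title plus the value-equation scores defined above.
for option in analyses:
    metrics = option.valueMetrics
    print(option.title,
          metrics.dreamOutcome.score,
          metrics.likelihood.score,
          metrics.timeDelay,
          metrics.effort.score)
```
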
Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
---
slug: "connection-pooling-in-depth"
title: "Connection Pooling - in Depth"
date: 2025-03-13T00:00:00+00:00
authors: [abeeshake]
tags: [ connection, database, network ]
draft: false
---

Here’s a table that maps **real-life reverse proxy scenarios** to recommended **TCP tuning parameters** for optimal performance and security:

---

### **TCP Tuning Recommendations for Reverse Proxy - Real Life Scenarios**

| **Scenario** | **tcp_fin_timeout** | **tcp_keepalive_time** | **tcp_keepalive_intvl** | **tcp_keepalive_probes** | **tcp_retries2** | **Reasoning & Trade-offs** |
|------------------------------------------------|---------------------|------------------------|-------------------------|--------------------------|------------------|--------------------------------------------------------------------------------------------------------------|
| **Public API Gateway (high concurrent clients)** | `15` | `30` | `10` | `3` | `5` | Quick cleanup of dead/idle connections to save resources, while allowing short keep-alives for API clients. |
| **Internal microservices (low latency, stable network)** | `10` | `60` | `20` | `3` | `3` | Fast connection recycling, rare need for keep-alives due to low latency, prioritizing efficiency. |
| **Mobile-heavy client traffic (prone to network drops)** | `30` | `120` | `20` | `5` | `7` | More lenient timeouts to account for intermittent mobile network instability; avoid prematurely dropping clients. |
| **WebSocket / long-lived connections (chat apps, gaming)** | `60` | `300` | `60` | `5` | `8` | Allow long idle connections; keep-alives to detect dead connections without cutting active clients abruptly. |
| **DDoS-prone public proxy (security-focused)** | `5` | `30` | `5` | `2` | `3` | Aggressive timeouts to prevent resource exhaustion; fast cleanup of potentially malicious connections. |
| **IoT Device Communication (sporadic, unstable)** | `30` | `180` | `30` | `4` | `6` | Longer keep-alives to maintain connection with low-power devices, balanced with cleanup to avoid idle hangs. |
| **Slow clients behind proxies (corporate clients, satellite)** | `20` | `150` | `30` | `4` | `6` | Moderate timeouts to handle slow networks without dropping legitimate users. |

---

### **Legend (Quick Reference)**

| **Parameter** | **Purpose** |
|-------------------------------|----------------------------------------------|
| `tcp_fin_timeout` | How long a closing connection waits in the FIN-WAIT-2 state before being torn down. |
| `tcp_keepalive_time` | Idle time before sending the first keep-alive probe. |
| `tcp_keepalive_intvl` | Interval between successive keep-alive probes. |
| `tcp_keepalive_probes` | Number of failed probes before dropping the connection. |
| `tcp_retries2` | Max TCP retransmissions before giving up. |

---

### ⚙️ **Notes:**
- **Lower timeouts**: Free up resources quickly, but risk dropping slow/legit connections.
- **Higher timeouts**: Improve user experience over slow networks but consume more resources.
- **Keep-alive settings**: Essential for long-lived or idle connections to detect dead peers.
- **Retries**: Trade-off between network resilience and resource use.

---

A `sysctl.conf` snippet can be derived directly from the table above for whichever scenario matches your deployment.

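As a rough sketch for the first row of the table (the public API gateway profile), the snippet below just transcribes those values into sysctl keys; tune them for your own traffic before applying, and load the file with `sudo sysctl --system`:

```
# /etc/sysctl.d/99-reverse-proxy.conf (public API gateway profile from the table above)
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_retries2 = 5
```
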
----

source: https://github.com/brettwooldridge/HikariCP/wiki/Down-the-Rabbit-Hole

----
