# State Management Standards for Clean Code
This document outlines coding standards for state management under Clean Code principles. It provides specific guidelines and examples to ensure that state-related code is maintainable, readable, performant, and secure. The standards reflect current recommended practices, with examples drawn primarily from JavaScript and React.
## 1. Principles of Clean State Management
Clean state management is about structuring your application's data in a way that's predictable, manageable, and testable. It involves making state changes explicit, limiting side effects, and ensuring data consistency. Applying clean code principles to state enhances maintainability, reduces bugs, and improves collaborative development.
* **Single Source of Truth:** Ensure each piece of data has one authoritative source. This prevents inconsistencies and simplifies debugging.
* **Immutability:** Favor immutable data structures. Immutable data makes state changes more predictable and helps prevent unintended side effects.
* **Explicit State Transitions:** State transitions should be clear and well-defined, making it easier to understand how the application evolves over time.
* **Separation of Concerns:** Keep state management logic separate from UI components or business logic. This enhances modularity and testability.
* **Minimal Global State:** Limit the use of global state. Widespread global state can make it difficult to track dependencies and lead to unexpected behavior.
## 2. Architectural Patterns for State Management
Choosing the right architecture for state management depends on the complexity of the application. Here are a few common patterns and guidelines:
### 2.1 Local State
Managing state within a single component should be the default. Use local state for isolated functionality that doesn't need to be shared or observed beyond the component's scope.
* **Do This:** Use local state for isolated component features.
* **Don't Do This:** Share local state directly between unrelated components.
"""javascript
// Example React local state using useState
import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<p>Count: {count}</p>
setCount(count + 1)}>Increment
);
}
"""
### 2.2 Redux Pattern (Centralized State)
The Redux pattern emphasizes a single store for application state, using reducers to handle actions and state transitions immutably.
* **Do This:**
    * Use Redux or similar libraries for complex, application-wide state.
    * Define actions as plain objects with a "type" field.
    * Use pure functions as reducers to ensure predictable state transitions.
    * Cache selector results (memoized selectors) to prevent unnecessary re-renders; see the sketch after the example below.
* **Don't Do This:**
    * Mutate the state directly in reducers.
    * Perform asynchronous operations directly in reducers.
    * Overuse Redux for simple components with minimal state.
"""javascript
// Example Redux setup
// Action
const INCREMENT = 'INCREMENT';
const DECREMENT = 'DECREMENT';
const increment = () => ({ type: INCREMENT });
const decrement = () => ({ type: DECREMENT });
// Reducer
const initialState = { count: 0 };
const counterReducer = (state = initialState, action) => {
switch (action.type) {
case INCREMENT:
return { ...state, count: state.count + 1 };
case DECREMENT:
return { ...state, count: state.count - 1 };
default:
return state;
}
};
// Store creation
import { createStore } from 'redux';
const store = createStore(counterReducer);
// Component integration (React example)
import { useSelector, useDispatch } from 'react-redux';
function CounterComponent() {
const count = useSelector(state => state.count);
const dispatch = useDispatch();
return (
<p>Count: {count}</p>
dispatch(increment())}>Increment
dispatch(decrement())}>Decrement
);
}
"""
### 2.3 Context API (Scoped State)
The Context API provides a way to pass data through the component tree without passing props manually at every level. While simpler than Redux, it is still intended for state that genuinely benefits from being shared across a subtree, not as a general-purpose store.
* **Do This:**
    * Use Context API for theming, user authentication, or other application-wide configurations.
    * Use the "useContext" hook to consume context values.
    * Combine Context API with "useReducer" for complex state logic; see the sketch after the example below.
* **Don't Do This:**
    * Use Context API as a general replacement for prop drilling in scenarios where component composition is better suited.
    * Overuse Context API, which can result in unnecessary re-renders.
"""javascript
// Example Context API setup
import React, { createContext, useContext, useState } from 'react';
// Create Context
const ThemeContext = createContext();
// Context Provider
function ThemeProvider({ children }) {
const [theme, setTheme] = useState('light');
const toggleTheme = () => {
setTheme(prevTheme => (prevTheme === 'light' ? 'dark' : 'light'));
};
return (
{children}
);
}
// Custom Hook to consume Context
function useTheme() {
return useContext(ThemeContext);
}
// Component using Context
function ThemeToggler() {
const { theme, toggleTheme } = useTheme();
return (
Toggle Theme (Current: {theme})
);
}
// Usage in App
function App() {
return (
);
}
"""
### 2.4 Observable Pattern (Reactive State)
The observable pattern, often implemented with libraries like RxJS, is used for handling asynchronous data streams and complex event-driven applications.
* **Do This:**
    * Use RxJS or similar libraries for handling asynchronous data streams.
    * Structure application logic as a pipeline of observable transformations.
    * Use subjects to bridge different parts of the application.
* **Don't Do This:**
    * Overuse RxJS for simple event handling.
    * Introduce memory leaks by not unsubscribing from observables; see the cleanup sketch after the example below.
    * Create overly complex observable chains that are hard to understand.
"""javascript
// Example RxJS setup
import { fromEvent, interval } from 'rxjs';
import { map, filter, scan, takeUntil } from 'rxjs/operators';
// Example: Click counter observable
const button = document.getElementById('myButton');
const click$ = fromEvent(button, 'click');
const counter$ = click$.pipe(
map(() => 1),
scan((acc, value) => acc + value, 0)
);
counter$.subscribe(count => {
console.log("Button clicked ${count} times");
});
// Example: Auto-incrementing counter that stops after 5 seconds
const interval$ = interval(1000);
const stop$ = fromEvent(document.getElementById('stopButton'), 'click');
interval$.pipe(
takeUntil(stop$) // Stop the interval when the stop button is clicked
).subscribe(val => console.log("Interval value: ${val}"));
"""
### 2.5 State Machines
State machines are useful for managing complex state transitions with clearly defined states and transitions.
* **Do This:**
    * Use state machines for scenarios with clearly defined states and transitions.
    * Model state transitions explicitly, reducing possible unexpected states.
    * Ensure state machines are well-documented, especially for complex systems.
* **Don't Do This:**
    * Overuse state machines for simple state management.
    * Create monolithic state machines that are difficult to understand.
"""javascript
// Example: JavaScript state machine using XState
import { createMachine, interpret } from 'xstate';
// Define the state machine
const trafficLightMachine = createMachine({
id: 'trafficLight',
initial: 'green',
states: {
green: {
after: {
5000: 'yellow' // After 5 seconds, transition to yellow
}
},
yellow: {
after: {
1000: 'red' // After 1 second, transition to red
}
},
red: {
after: {
6000: 'green' // After 6 seconds, transition to green
}
}
}
});
// Interpret the state machine
const trafficService = interpret(trafficLightMachine).start();
trafficService.onTransition(state => {
console.log("Traffic light is now ${state.value}");
});
// Example usage (simulating events or external triggers)
// trafficService.send('TIMER');
"""
## 3. Implementing Immutability
Immutability ensures that once an object is created, its state cannot be changed. This helps prevent accidental state mutations, making it easier to track and manage state changes, which aids in debugging and improves performance in certain scenarios.
* **Do This:**
    * Use immutable data structures and operations.
    * Make copies of objects or arrays before modifying them.
    * Employ libraries like Immutable.js for more complex scenarios.
* **Don't Do This:**
    * Directly modify object properties or array elements.
    * Assume that passing an object or array creates a new copy.
### 3.1 JavaScript Immutability Techniques
"""javascript
// Immutable Object Update
const originalObject = { name: 'John', age: 30 };
const updatedObject = { ...originalObject, age: 31 }; // Create a new object
// Immutable Array Update
const originalArray = [1, 2, 3];
const updatedArray = [...originalArray, 4]; // Create a new array
const removedArray = originalArray.filter(item => item !== 2); // Create new array without '2'
console.log(originalObject); // { name: 'John', age: 30 }
console.log(updatedObject); // { name: 'John', age: 31 }
console.log(originalArray); // [1, 2, 3]
console.log(updatedArray); // [1, 2, 3, 4]
console.log(removedArray); // [1, 3]
"""
### 3.2 Immutable.js
Immutable.js provides persistent immutable data structures, improving performance and simplifying state management for complex applications.
"""javascript
import { Map, List } from 'immutable';
// Immutable Map
const originalMap = Map({ name: 'John', age: 30 });
const updatedMap = originalMap.set('age', 31);
// Immutable List
const originalList = List([1, 2, 3]);
const updatedList = originalList.push(4);
console.log(originalMap.toJS()); // { name: 'John', age: 30 }
console.log(updatedMap.toJS()); // { name: 'John', age: 31 }
console.log(originalList.toJS()); // [1, 2, 3]
console.log(updatedList.toJS()); // [1, 2, 3, 4]
"""
## 4. Handling Side Effects
Side effects are operations that affect the state of the application outside of the current function or component. Properly managing side effects is crucial for maintaining predictable and testable code.
* **Do This:**
    * Isolate side effects in dedicated functions or modules.
    * Use effect hooks (e.g., "useEffect" in React) to manage side effects in components.
    * Handle errors gracefully when performing side effects.
* **Don't Do This:**
    * Perform side effects directly within reducers or pure functions.
    * Ignore potential errors in side effect operations.
### 4.1 Managing Effects with "useEffect"
"""javascript
import React, { useState, useEffect } from 'react';
function DataFetcher({ url }) {
const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
useEffect(() => {
const fetchData = async () => {
try {
const response = await fetch(url);
if (!response.ok) {
throw new Error("HTTP error! status: ${response.status}");
}
const result = await response.json();
setData(result);
} catch (e) {
setError(e);
} finally {
setLoading(false);
}
};
fetchData();
// Cleanup function (optional)
return () => {
// Cancel any pending requests or subscriptions
};
}, [url]); // Dependency array: effect runs only when 'url' changes
if (loading) return <p>Loading...</p>;
if (error) return <p>Error: {error.message}</p>;
if (!data) return <p>No data available.</p>;
return (
<pre>{JSON.stringify(data, null, 2)}</pre>
);
}
"""
### 4.2 Using Thunks with Redux
Thunks allow you to perform asynchronous operations in Redux actions. Note that the store must have the thunk middleware installed; see the setup sketch after the example below.
"""javascript
// Example Redux Thunk Action
const fetchDataRequest = () => ({ type: 'FETCH_DATA_REQUEST' });
const fetchDataSuccess = (data) => ({ type: 'FETCH_DATA_SUCCESS', payload: data });
const fetchDataFailure = (error) => ({ type: 'FETCH_DATA_FAILURE', payload: error });
// Async action using Redux Thunk
const fetchData = (url) => {
return async (dispatch) => {
dispatch(fetchDataRequest());
try {
const response = await fetch(url);
if (!response.ok) {
throw new Error("HTTP error! status: ${response.status}");
}
const data = await response.json();
dispatch(fetchDataSuccess(data));
} catch (error) {
dispatch(fetchDataFailure(error.message));
}
};
};
// Usage in Component
import { useDispatch } from 'react-redux';
function DataFetchButton({ url }) {
const dispatch = useDispatch();
return (
dispatch(fetchData(url))}>
Fetch Data
);
}
"""
## 5. Testing State Management
Testing state management involves verifying that state transitions occur correctly and that side effects are handled properly.
* **Do This:**
    * Write unit tests for reducers to verify state transitions.
    * Use mock stores and actions to test components connected to Redux.
    * Test side effects by mocking external dependencies; see the sketch at the end of this section.
* **Don't Do This:**
    * Omit testing for state management logic.
    * Write integration tests without proper unit testing.
### 5.1 Testing Reducers
"""javascript
// Reducer Test Example (Jest)
import counterReducer from './counterReducer'; // Assuming counterReducer.js
import { INCREMENT, DECREMENT } from './actions';
describe('counterReducer', () => {
it('should return the initial state', () => {
expect(counterReducer(undefined, {})).toEqual({ count: 0 });
});
it('should handle INCREMENT', () => {
expect(counterReducer({ count: 0 }, { type: INCREMENT })).toEqual({ count: 1 });
});
it('should handle DECREMENT', () => {
expect(counterReducer({ count: 1 }, { type: DECREMENT })).toEqual({ count: 0 });
});
});
"""
### 5.2 Testing React Components with Redux
"""javascript
// Component Test Example (React Testing Library and Redux Mock Store)
import React from 'react';
import { render, fireEvent } from '@testing-library/react';
import { Provider } from 'react-redux';
import configureStore from 'redux-mock-store';
import CounterComponent from './CounterComponent'; // Assuming CounterComponent.js
const mockStore = configureStore([]);
describe('CounterComponent', () => {
let store;
let component;
beforeEach(() => {
store = mockStore({ count: 0 });
store.dispatch = jest.fn(); // Mock dispatch function
component = render(
);
});
it('should display the initial count', () => {
expect(component.getByText('Count: 0')).toBeInTheDocument();
});
it('should dispatch increment action when increment button is clicked', () => {
fireEvent.click(component.getByText('Increment'));
expect(store.dispatch).toHaveBeenCalledWith({ type: 'INCREMENT' });
});
});
"""
## 6. Security Considerations for State Management
Security is a critical aspect of state management. Properly securing the state ensures that sensitive data is protected from unauthorized access and tampering.
* **Do This:**
    * Protect sensitive data in the state with encryption.
    * Validate data received from external sources before storing it in the state.
    * Sanitize user input to prevent XSS; see the escaping sketch after Section 6.1.
* **Don't Do This:**
    * Store sensitive data in plain text in the state.
    * Trust data received from external sources without validation.
    * Expose sensitive data in logs or error messages.
### 6.1 Data Validation
"""javascript
// Example Data Validation
const validateData = (data) => {
if (typeof data.email !== 'string' || !data.email.includes('@')) {
throw new Error('Invalid email format');
}
if (typeof data.age !== 'number' || data.age < 0 || data.age > 120) {
throw new Error('Invalid age');
}
return data;
};
// Usage in Reducer
const userReducer = (state = {}, action) => {
switch (action.type) {
case 'UPDATE_USER':
try {
const validatedData = validateData(action.payload);
return { ...state, ...validatedData };
} catch (error) {
console.error('Data validation failed:', error.message);
return state;
}
default:
return state;
}
};
"""
### 6.2 Encryption
Encrypting sensitive data ensures that even if the state is compromised, the data remains unreadable without the decryption key.
"""javascript
// Example Encryption (using CryptoJS)
import CryptoJS from 'crypto-js';
const encryptData = (data, key) => {
const encrypted = CryptoJS.AES.encrypt(JSON.stringify(data), key).toString();
return encrypted;
};
const decryptData = (encryptedData, key) => {
const bytes = CryptoJS.AES.decrypt(encryptedData, key);
try {
const decrypted = JSON.parse(bytes.toString(CryptoJS.enc.Utf8));
return decrypted;
} catch (e) {
console.error("Decryption error", e);
return null; // Or handle the error as appropriate
}
};
// Example usage
const sensitiveData = { creditCardNumber: '1234-5678-9012-3456' };
const encryptionKey = 'my-secret-key';
const encryptedData = encryptData(sensitiveData, encryptionKey);
console.log('Encrypted:', encryptedData);
const decryptedData = decryptData(encryptedData, encryptionKey);
console.log('Decrypted:', decryptedData);
"""
## 7. Optimizing Performance
Efficient state management is crucial for optimizing application performance, especially in complex applications with frequent state updates.
* **Do This:**
    * Use memoization techniques to prevent unnecessary re-renders.
    * Implement lazy loading for components that rely on large state objects; see the React.lazy sketch at the end of this section.
    * Batch state updates to minimize the number of renders.
* **Don't Do This:**
    * Update the state unnecessarily.
    * Trigger frequent re-renders for changes that have no visible effect.
### 7.1 Memoization
Memoization prevents re-renders by caching the results of expensive calculations or component renders.
"""javascript
import React, { useState, useMemo } from 'react';
function ExpensiveComponent({ data }) {
// Simulate an expensive computation
const computedValue = useMemo(() => {
console.log('Computing expensive value...');
// Complex calculation based on data
return data.map(item => item * 2).reduce((acc, val) => acc + val, 0);
}, [data]); // Only recompute if 'data' changes
return (
<p>Computed Value: {computedValue}</p>
);
}
function ParentComponent() {
const [count, setCount] = useState(0);
const data = [1, 2, 3, 4, 5]; // Static data
return (
setCount(count + 1)}>Increment Count
<p>Count: {count}</p>
{/*ExpensiveComponent only re-renders if "data" changes, not on count changes*/}
);
}
function MemoizedComponent({ data }) {
// Simulate a render-heavy component
console.log('Rendering MemoizedComponent...');
return <p>Data: {data.join(', ')}</p>;
}
// Memoize MemoizedComponent to prevent unnecessary re-renders
const OptimizedMemoizedComponent = React.memo(MemoizedComponent);
function ParentMemoComponent() {
const [count, setCount] = useState(0);
const data = [1, 2, 3, 4, 5];
return (
setCount(count + 1)}>Increment Count
<p>Count: {count}</p>
{/* MemoizedComponent only re-renders if its props change, not on count changes */}
);
}
"""
### 7.2 Batching Updates
Batching updates ensures that multiple state updates are grouped into a single render cycle.
"""javascript
import React, { useState } from 'react';
import { unstable_batchedUpdates } from 'react-dom'; // Available only in some React versions
function BatchUpdatesComponent() {
const [count1, setCount1] = useState(0);
const [count2, setCount2] = useState(0);
const updateBothCounts = () => {
unstable_batchedUpdates(() => {
// Both state updates are batched into a single render
setCount1(prevCount => prevCount + 1);
setCount2(prevCount => prevCount + 1);
});
};
return (
<p>Count 1: {count1}</p>
<p>Count 2: {count2}</p>
Update Both Counts
);
}
"""
These standards provide a comprehensive guide to managing state in a clean and maintainable way. By following these guidelines, developers can build robust, performant, and secure applications.
# Using .clinerules with Cline
This guide explains how to effectively use .clinerules with Cline, the AI-powered coding assistant. The .clinerules file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.
Place the .clinerules file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
```yaml
# Project Overview
project:
  name: 'Your Project Name'
  description: 'Brief project description'
  stack:
    - technology: 'Framework/Language'
      version: 'X.Y.Z'
    - technology: 'Database'
      version: 'X.Y.Z'
```

```yaml
# Code Standards
standards:
  style:
    - 'Use consistent indentation (2 spaces)'
    - 'Follow language-specific naming conventions'
  documentation:
    - 'Include JSDoc comments for all functions'
    - 'Maintain up-to-date README files'
  testing:
    - 'Write unit tests for all new features'
    - 'Maintain minimum 80% code coverage'
```

```yaml
# Security Guidelines
security:
  authentication:
    - 'Implement proper token validation'
    - 'Use environment variables for secrets'
  dataProtection:
    - 'Sanitize all user inputs'
    - 'Implement proper error handling'
```
* Be Specific
* Maintain Organization
* Regular Updates
```yaml
# Common Patterns Example
patterns:
  components:
    - pattern: 'Use functional components by default'
    - pattern: 'Implement error boundaries for component trees'
  stateManagement:
    - pattern: 'Use React Query for server state'
    - pattern: 'Implement proper loading states'
```
* Commit the Rules: keep .clinerules in version control
* Team Collaboration
* Rules Not Being Applied
* Conflicting Rules
* Performance Considerations
```yaml
# Basic .clinerules Example
project:
  name: 'Web Application'
  type: 'Next.js Frontend'
standards:
  - 'Use TypeScript for all new code'
  - 'Follow React best practices'
  - 'Implement proper error handling'
testing:
  unit:
    - 'Jest for unit tests'
    - 'React Testing Library for components'
  e2e:
    - 'Cypress for end-to-end testing'
documentation:
  required:
    - 'README.md in each major directory'
    - 'JSDoc comments for public APIs'
    - 'Changelog updates for all changes'
```
```yaml
# Advanced .clinerules Example
project:
  name: 'Enterprise Application'
compliance:
  - 'GDPR requirements'
  - 'WCAG 2.1 AA accessibility'
architecture:
  patterns:
    - 'Clean Architecture principles'
    - 'Domain-Driven Design concepts'
security:
  requirements:
    - 'OAuth 2.0 authentication'
    - 'Rate limiting on all APIs'
    - 'Input validation with Zod'
```
Add as a custom prompt to Roocode: you can completely replace the system prompt for this mode (aside from the role definition and custom instructions) by creating a file at .roo/system-prompt-codershortrules in your workspace.

You are Roo, a highly skilled software engineer with extensive knowledge in many programming languages, frameworks, design patterns, and best practices. Use tools one at a time to complete tasks step-by-step. Wait for user confirmation after each tool use.

Tools:
* read_file: Read file contents. Use for analyzing code, text files, or configs. Output includes line numbers. Extracts text from PDFs and DOCX. Not for other binary files. Parameters: path (required)
* search_files: Search files in a directory using regex. Shows matches with context. Useful for finding code patterns or specific content. Parameters: path (required), regex (required), file_pattern (optional)
* list_files: List files and directories. Can be recursive. Don't use to check if files you created exist; the user will confirm. Parameters: path (required), recursive (optional)
* list_code_definition_names: List top-level code definitions (classes, functions, etc.) in a directory. Helps understand codebase structure. Parameters: path (required)
* apply_diff: Replace code in a file using a search and replace block. Must match existing content exactly. Use read_file first if unsure. Parameters: path (required), diff (required), start_line (required), end_line (required). Diff format:

```text
<<<<<<< SEARCH
[exact content]
=======
[new content]
>>>>>>> REPLACE
```

* write_to_file: Write full content to a file. Overwrites if the file exists, creates it if not. MUST provide COMPLETE file content, not partial updates. MUST include all 3 parameters: path, content, and line_count. Parameters: path (required), content (required), line_count (required)
* execute_command: Run CLI commands. Explain what the command does. Prefer complex commands over scripts. Commands run in the current directory. To run in a different directory, use cd path && command. Parameters: command (required)
* ask_followup_question: Ask the user a question to get more information. Use when you need clarification or details. Parameters: question (required)
* attempt_completion: Present the task result to the user. Optionally provide a CLI command to demo the result. Don't use it until previous tool uses are confirmed successful. Parameters: result (required), command (optional)

Tool Use Formatting:
* IMPORTANT: Replace tool_name with the tool you want to use, for example read_file.
* IMPORTANT: Replace parameter_name with the parameter name, for example path.
* Format tool use with XML tags, e.g.:

```text
value1
value2
```

Guidelines:
* Choose the right tool for the task.
* Use one tool at a time.
* Format tool use correctly.
* Wait for user confirmation after each tool use.
* Don't assume tool success; wait for user feedback.

Rules:
* Pass correct paths to tools. Don't use ~ or $HOME.
* Tailor commands to the user's system.
* Prefer other editing tools over write_to_file for changes.
* Provide complete file content when using write_to_file.
* Don't ask unnecessary questions; use tools to get information.
* Don't be conversational; be direct and technical.
* Consider environment_details for context.
* ALWAYS replace tool_name, parameter_name, and parameter_value with actual values.

Objective:
* Break the task into steps.
* Use tools to accomplish each step.
* Wait for user confirmation after each tool use.
* Use attempt_completion when the task is complete.
# Angular v19+ Development Standards and Best Practices: A Comprehensive Guide ## 1. Core Architecture Guidelines We follow these core architectural patterns: - **Standalone Components:** All components, directives, and pipes are standalone by default (Angular v19+) - **Strong Typing:** Implement proper TypeScript types, interfaces, and models throughout the codebase - **Single Responsibility Principle (SRP):** Each component and service should have a single, well-defined responsibility - **Rule of One:** Keep files focused on a single concept or functionality - **Reactive State Management:** Use Signals for reactive and efficient state management - **Dependency Injection:** Utilize Angular's DI system for service management - **Lazy Loading:** Implement Deferrable Views and route-level lazy loading with `loadComponent` - **Directive Composition:** Use the Directive Composition API for reusable component behavior ## 2. Angular Style Guide Compliance Following the official Angular Style Guide: - **Code Size:** Limit files to 400 lines of code - **Single Purpose Files:** Define one entity (component, service, etc.) per file - **Naming Conventions:** Use consistent, descriptive names for all symbols - **Folder Structure:** Organize by feature-based folders - **File Separation:** Extract templates and styles to their own files for components - **Property Decoration:** Properly decorate input and output properties - **Component Selectors:** Use custom prefixes and kebab-case for component selectors (e.g., `app-feature-name`) ## 3. Input Signals For component inputs, follow these guidelines: - **Modern Signal-Based Inputs:** Use the `input()` function instead of `@Input()` decorator: ```typescript // Preferred value = input(0); // Creates InputSignal // Instead of @Input() value = 0; ``` - **Required Inputs:** Use `input.required()` for mandatory inputs: ```typescript value = input.required<number>(); ``` - **Input Transformations:** Apply transformations when needed: ```typescript disabled = input(false, { transform: booleanAttribute }); value = input(0, { transform: numberAttribute }); ``` - **Two-Way Binding:** Use model inputs for two-way binding: ```typescript value = model(0); // Creates a model input with change propagation // Update model values with .set() or .update() increment() { this.value.update(v => v + 1); } ``` - **Input Aliases:** Use aliases when necessary: ```typescript value = input(0, { alias: "sliderValue" }); ``` ## 4. Component Development When creating components: - **Naming Pattern:** Use consistent naming - `feature.type.ts` (e.g., `hero-list.component.ts`) - **Template Extraction:** Use separate `.html` files for non-trivial templates - **Style Extraction:** Place styles in separate `.css/.scss` files - **Signal-Based Inputs:** Use `input()` function for component inputs - **Two-Way Binding:** Use `model()` function for two-way binding - **Lifecycle Hooks:** Implement appropriate lifecycle hook interfaces (OnInit, OnDestroy, etc.) 
- **Element Selectors:** Keep components as elements (`selector: 'app-hero-detail'`) - **Logic Delegation:** Move complex logic to services - **Input Initialization:** Provide default values or mark as required - **Lazy Loading:** Use `@defer` for heavy components or features - **Error Handling:** Implement proper error boundaries with try-catch blocks - **Modern Control Flow:** Use `@if`, `@for`, `@switch` instead of structural directives - **State Representation:** Implement proper loading and error states - **Derived State:** Use `computed()` for derived state calculations ## 5. Styling Standards Our styling conventions: - **Component Encapsulation:** Use component-specific styles with proper encapsulation - **CSS Methodology:** Follow BEM methodology for CSS class naming when not using Angular Material - **Component Libraries:** Use Angular Material or other component libraries consistently - **Theming:** Implement proper theming and color systems - **Accessibility:** Follow a11y standards in all components - **Dark Mode:** Support dark mode where appropriate ## 6. Services and Dependency Injection For services and DI: - **Service Declaration:** Use `@Injectable()` decorator with `providedIn: 'root'` for singleton services - **Data Services:** Make data services responsible for API calls and data operations - **Error Handling:** Implement proper error handling in services - **DI Hierarchy:** Follow the Angular DI hierarchy appropriately - **Service Contracts:** Use interfaces to define service contracts - **Focused Responsibilities:** Keep services focused on specific tasks ## 7. Directives and Pipes When creating directives and pipes: - **Attribute Directives:** Use for presentation logic without templates - **Host Property:** Use the `host` property for bindings and listeners: ```typescript @Directive({ standalone: true, selector: '[appHighlight]', host: { // Host bindings '[class.highlighted]': 'isHighlighted', '[style.color]': 'highlightColor', // Host listeners '(click)': 'onClick($event)', '(mouseenter)': 'onMouseEnter()', '(mouseleave)': 'onMouseLeave()', // Static properties 'role': 'button', '[attr.aria-label]': 'ariaLabel' } }) ``` - **Selector Prefixes:** Use custom prefixes for directive selectors - **Pure Pipes:** Make pipes pure when possible for better performance - **Pipe Naming:** Follow naming conventions for pipes (camelCase) ## 8. State Management For state management: - **Signals:** Use Signals as the primary state management solution - **Component Inputs:** Use signal inputs with `input()` for component inputs - **Two-Way Binding:** Use model inputs with `model()` for two-way binding - **Local State:** Use writable signals with `signal()` for local component state - **Derived State:** Use computed signals with `computed()` for derived state - **Side Effects:** Use `effect()` for handling side effects - **Error Handling:** Implement proper error handling in signal computations - **Signal Conversion:** Use `toSignal()` and `toObservable()` for interoperability with RxJS ## 9. 
Testing Standards For testing: - **Test Coverage:** Maintain high test coverage for all components and services - **Unit Tests:** Write focused unit tests for services, pipes, and components - **Component Testing:** Test components with TestBed and component harnesses - **Mocking:** Use proper mocking techniques for dependencies - **Test Organization:** Follow AAA pattern (Arrange, Act, Assert) for test organization - **Test Naming:** Use descriptive test names that explain the expected behavior ## 10. Performance Optimization For optimal performance: - **Change Detection:** Use OnPush change detection strategy for components - **Lazy Loading:** Implement lazy loading for routes and components - **Virtual Scrolling:** Use virtual scrolling for long lists - **Memoization:** Memoize expensive computations - **Bundle Size:** Monitor and optimize bundle size - **Server-Side Rendering:** Implement SSR for improved initial load performance - **Web Workers:** Offload intensive operations to web workers when appropriate ## 11. Security Practices Security best practices: - **XSS Prevention:** Always sanitize user input - **CSRF Protection:** Implement CSRF tokens for forms - **Content Security Policy:** Use appropriate CSP headers - **Authentication:** Implement secure authentication practices - **Authorization:** Use proper authorization checks - **Sensitive Data:** Never expose sensitive data in client-side code ## 12. Accessibility Standards Accessibility requirements: - **ARIA Attributes:** Use appropriate ARIA attributes - **Keyboard Navigation:** Ensure all interactive elements are keyboard accessible - **Color Contrast:** Maintain proper color contrast ratios - **Screen Readers:** Test with screen readers - **Focus Management:** Implement proper focus management - **Alternative Text:** Provide alt text for images ## 13. Documentation Documentation standards: - **Code Comments:** Use JSDoc comments for public APIs - **README Files:** Maintain up-to-date README files for projects and major features - **API Documentation:** Document public APIs thoroughly - **Changelog:** Maintain a changelog for version updates - **Usage Examples:** Provide usage examples for components and services
# NgRx Signals Patterns This document outlines the state management patterns used in our Angular applications with NgRx Signals Store. ## 1. NgRx Signals Architecture - **Component-Centric Design:** Stores are designed around component requirements - **Hierarchical State:** State is organized in hierarchical structures - **Computed State:** Derived state uses computed values - **Declarative Updates:** State updates use patchState for immutability - **Store Composition:** Stores compose using features and providers - **Reactivity:** UIs build on automatic change detection - **Signal Interoperability:** Signals integrate with existing RxJS-based systems - **SignalMethod & RxMethod:** Use `signalMethod` for lightweight, signal-driven side effects; use `rxMethod` for Observable-based side effects and RxJS integration. When a service returns an Observable, always use `rxMethod` for side effects instead of converting to Promise or using async/await. ## 2. Signal Store Structure - **Store Creation:** The `signalStore` function creates stores - **Protected State:** Signal Store state is protected by default (`{ protectedState: true }`) - **State Definition:** Initial state shape is defined with `withState<StateType>({...})` - Root level state is always an object: `withState({ users: [], count: 0 })` - Arrays are contained within objects: `withState({ items: [] })` - **Dependency Injection:** Stores are injectable with `{ providedIn: 'root' }` or feature/component providers - **Store Features:** Built-in features (`withEntities`, `withHooks`, `signalStoreFeature`) handle cross-cutting concerns and enable store composition - **State Interface:** State interfaces provide strong typing - **Private Members:** Prefix all internal state, computed signals, and methods with an underscore (`_`). Ensure unique member names across state, computed, and methods. ```typescript withState({ count: 0, _internalCount: 0 }); withComputed(({ count, _internalCount }) => ({ doubleCount: computed(() => count() * 2), _doubleInternal: computed(() => _internalCount() * 2), })); ``` - **Member Integrity:** Store members have unique names across state, computed, and methods - **Initialization:** State initializes with meaningful defaults - **Collection Management:** The `withEntities` feature manages collections. Prefer atomic entity operations (`addEntity`, `updateEntity`, `removeEntity`, `setAllEntities`) over bulk state updates. Use `entityConfig` and `selectId` for entity identification. - **Entity Adapter Configuration:** Use `entityConfig` to configure the entity adapter for each store. Always specify the `entity` type, `collection` name, and a `selectId` function for unique entity identification. Pass the config to `withEntities<T>(entityConfig)` for strong typing and consistent entity management. ```typescript const userEntityConfig = entityConfig({ entity: type<User>(), collection: "users", selectId: (user: User) => user.id, }); export const UserStore = signalStore( { providedIn: "root" }, withState(initialState), withEntities(userEntityConfig), // ... ); ``` - **Custom Store Properties:** Use `withProps` to add static properties, observables, and dependencies. Expose observables with `toObservable`. 
```typescript // Signal store structure example import { signalStore, withState, withComputed, withMethods, patchState, } from "@ngrx/signals"; import { withEntities, entityConfig } from "@ngrx/signals/entities"; import { computed, inject } from "@angular/core"; import { UserService } from "./user.service"; import { User } from "./user.model"; import { setAllEntities } from "@ngrx/signals/entities"; export interface UserState { selectedUserId: string | null; loading: boolean; error: string | null; } const initialState: UserState = { selectedUserId: null, loading: false, error: null, }; const userEntityConfig = entityConfig({ entity: type<User>(), collection: "users", selectId: (user: User) => user.id, }); export const UserStore = signalStore( { providedIn: "root" }, withState(initialState), withEntities(userEntityConfig), withComputed(({ usersEntities, usersEntityMap, selectedUserId }) => ({ selectedUser: computed(() => { const id = selectedUserId(); return id ? usersEntityMap()[id] : undefined; }), totalUserCount: computed(() => usersEntities().length), })), withMethods((store, userService = inject(UserService)) => ({ loadUsers: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true, error: null }); return userService.getUsers().pipe( tapResponse({ next: (users) => patchState(store, setAllEntities(users, userEntityConfig), { loading: false, }), error: () => patchState(store, { loading: false, error: "Failed to load users", }), }), ); }), ), ), selectUser(userId: string | null): void { patchState(store, { selectedUserId: userId }); }, })), ); ``` ## 3. Signal Store Methods - **Method Definition:** Methods are defined within `withMethods` - **Dependency Injection:** The `inject()` function accesses services within `withMethods` - **Method Organization:** Methods are grouped by domain functionality - **Method Naming:** Methods have clear, action-oriented names - **State Updates:** `patchState(store, newStateSlice)` or `patchState(store, (currentState) => newStateSlice)` updates state immutably - **Async Operations:** Methods handle async operations and update loading/error states - **Computed Properties:** `withComputed` defines derived state - **RxJS Integration:** `rxMethod` integrates RxJS streams. Use `rxMethod` for all store methods that interact with Observable-based APIs or services. Avoid using async/await with Observables in store methods. ```typescript // Signal store method patterns import { signalStore, withState, withMethods, patchState } from "@ngrx/signals"; import { inject } from "@angular/core"; import { TodoService } from "./todo.service"; import { Todo } from "./todo.model"; export interface TodoState { todos: Todo[]; loading: boolean; } export const TodoStore = signalStore( { providedIn: "root" }, withState<TodoState>({ todos: [], loading: false }), withMethods((store, todoService = inject(TodoService)) => ({ addTodo(todo: Todo): void { patchState(store, (state) => ({ todos: [...state.todos, todo], })); }, loadTodosSimple: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true }); return todoService.getTodos().pipe( tapResponse({ next: (todos) => patchState(store, { todos, loading: false }), error: () => patchState(store, { loading: false }), }), ); }), ), ), })), ); ``` ## 4. 
Entity Management - **Entity Configuration:** Entity configurations include ID selectors - **Collection Operations:** Entity operations handle CRUD operations - **Entity Relationships:** Computed properties manage entity relationships - **Entity Updates:** Prefer atomic entity operations (`addEntity`, `updateEntity`, `removeEntity`, `setAllEntities`) over bulk state updates. Use `entityConfig` and `selectId` for entity identification. ```typescript // Entity management patterns const userEntityConfig = entityConfig({ entity: type<User>(), collection: "users", selectId: (user: User) => user.id, }); export const UserStore = signalStore( withEntities(userEntityConfig), withMethods((store) => ({ addUser: signalMethod<User>((user) => { patchState(store, addEntity(user, userEntityConfig)); }), updateUser: signalMethod<{ id: string; changes: Partial<User> }>( ({ id, changes }) => { patchState(store, updateEntity({ id, changes }, userEntityConfig)); }, ), removeUser: signalMethod<string>((id) => { patchState(store, removeEntity(id, userEntityConfig)); }), setUsers: signalMethod<User[]>((users) => { patchState(store, setAllEntities(users, userEntityConfig)); }), })), ); ``` ## 5. Component Integration ### Component State Access - **Signal Properties:** Components access signals directly in templates - **OnPush Strategy:** Signal-based components use OnPush change detection - **Store Injection:** Components inject store services with the `inject` function - **Default Values:** Signals have default values - **Computed Values:** Components derive computed values from signals - **Signal Effects:** Component effects handle side effects ```typescript // Component integration patterns @Component({ standalone: true, imports: [UserListComponent], template: ` @if (userStore.users().length > 0) { <app-user-list [users]="userStore.users()"></app-user-list> } @else { <p>No users loaded yet.</p> } <div>Selected user: {{ selectedUserName() }}</div> `, changeDetection: ChangeDetectionStrategy.OnPush, }) export class UsersContainerComponent implements OnInit { readonly userStore = inject(UserStore); selectedUserName = computed(() => { const user = this.userStore.selectedUser(); return user ? user.name : "None"; }); constructor() { effect(() => { const userId = this.userStore.selectedUserId(); if (userId) { console.log(`User selected: ${userId}`); } }); } ngOnInit() { this.userStore.loadUsers(); } } ``` ### Signal Store Hooks - **Lifecycle Hooks:** The `withHooks` feature adds lifecycle hooks to stores - **Initialization:** The `onInit` hook initializes stores - **Cleanup:** The `onDestroy` hook cleans up resources - **State Synchronization:** Hooks synchronize state between stores ```typescript // Signal store hooks patterns export const UserStore = signalStore( withState<UserState>({ /* initial state */ }), withMethods(/* store methods */), withHooks({ onInit: (store) => { // Initialize the store store.loadUsers(); // Return cleanup function if needed return () => { // Cleanup code }; }, }), ); ``` ## 6. 
Advanced Signal Patterns ### Signal Store Features - **Feature Creation:** The `signalStoreFeature` function creates reusable features - **Generic Feature Types:** Generic type parameters enhance feature reusability ```typescript function withMyFeature<T>(config: Config<T>) { return signalStoreFeature(/*...*/); } ``` - **Feature Composition:** Multiple features compose together - **Cross-Cutting Concerns:** Features handle logging, undo/redo, and other concerns - **State Slices:** Features define and manage specific state slices ```typescript // Signal store feature patterns export function withUserFeature() { return signalStoreFeature( withState<UserFeatureState>({ /* feature state */ }), withComputed((state) => ({ /* computed properties */ })), withMethods((store) => ({ /* methods */ })), ); } // Using the feature export const AppStore = signalStore( withUserFeature(), withOtherFeature(), withMethods((store) => ({ /* app-level methods */ })), ); ``` ### Signals and RxJS Integration - **Signal Conversion:** `toSignal()` and `toObservable()` convert between Signals and Observables - **Effects:** Angular's `effect()` function reacts to signal changes - **RxJS Method:** `rxMethod<T>(pipeline)` handles Observable-based side effects. Always prefer `rxMethod` for Observable-based service calls in stores. Do not convert Observables to Promises for store logic. - Accepts input values, Observables, or Signals - Manages subscription lifecycle automatically - **Reactive Patterns:** Signals combine with RxJS for complex asynchronous operations ```typescript // Signal and RxJS integration patterns import { signalStore, withState, withMethods, patchState } from "@ngrx/signals"; import { rxMethod } from "@ngrx/signals/rxjs-interop"; import { tapResponse } from "@ngrx/operators"; import { pipe, switchMap } from "rxjs"; import { inject } from "@angular/core"; import { HttpClient } from "@angular/common/http"; import { User } from "./user.model"; export interface UserState { users: User[]; loading: boolean; error: string | null; } export const UserStore = signalStore( { providedIn: "root" }, withState({ users: [], loading: false, error: null }), withMethods((store, http = inject(HttpClient)) => ({ loadUsers: rxMethod<void>( pipe( switchMap(() => { patchState(store, { loading: true, error: null }); return http.get<User[]>("/api/users").pipe( tapResponse({ next: (users) => patchState(store, { users, loading: false }), error: () => patchState(store, { loading: false, error: "Failed to load users", }), }), ); }), ), ), })), ); ``` ### Signal Method for Side Effects The `signalMethod` function manages side effects driven by Angular Signals within Signal Store: - **Input Flexibility:** The processor function accepts static values or Signals - **Automatic Cleanup:** The underlying effect cleans up when the store is destroyed - **Explicit Tracking:** Only the input signal passed to the processor function is tracked - **Lightweight:** Smaller bundle size compared to `rxMethod` ```typescript // Signal method patterns import { signalStore, withState, withMethods, patchState } from '@ngrx/signals'; import { signalMethod } from '@ngrx/signals'; import { inject } from '@angular/core'; import { Logger } from './logger'; interface UserPreferencesState { theme: 'light' | 'dark'; sendNotifications: boolean; const initialState: UserPreferencesState = { theme: 'light', sendNotifications: true, }; export const PreferencesStore = signalStore( { providedIn: 'root' }, withState(initialState), withProps(() => ({ logger: 
inject(Logger), })); withMethods((store) => ({ setSendNotifications(enabled: boolean): void { patchState(store, { sendNotifications: enabled }); }, // Signal method reacts to theme changes logThemeChange: signalMethod<'light' | 'dark'>((theme) => { store.logger.log(`Theme changed to: ${theme}`); }), setTheme(newTheme: 'light' | 'dark'): void { patchState(store, { theme: newTheme }); }, })), ); ``` ## 7. Custom Store Properties - **Custom Properties:** The `withProps` feature adds static properties, observables, and dependencies - **Observable Exposure:** `toObservable` within `withProps` exposes state as observables ```typescript withProps(({ isLoading }) => ({ isLoading$: toObservable(isLoading), })); ``` - **Dependency Grouping:** `withProps` groups dependencies for use across store features ```typescript withProps(() => ({ booksService: inject(BooksService), logger: inject(Logger), })); ``` ## 8. Project Organization ### Store Organization - **File Location:** Store definitions (`*.store.ts`) exist in dedicated files - **Naming Convention:** Stores follow the naming pattern `FeatureNameStore` - **Model Co-location:** State interfaces and models exist near store definitions - **Provider Functions:** Provider functions (`provideFeatureNameStore()`) encapsulate store providers ```typescript // Provider function pattern import { Provider } from "@angular/core"; import { UserStore } from "./user.store"; export function provideUserSignalStore(): Provider { return UserStore; } ``` ### Store Hierarchy - **Parent-Child Relationships:** Stores have clear relationships - **State Sharing:** Related components share state - **State Ownership:** Each state slice has a clear owner - **Store Composition:** Complex UIs compose multiple stores
# API Integration Standards for Clean Code This document outlines coding standards and best practices for integrating with backend services and external APIs within the Clean Code framework. It emphasizes readability, maintainability, performance, and security. These standards are aimed at guiding developers and AI coding assistants in producing high-quality, robust, and scalable integrations. ## 1. API Integration Principles and Clean Code API integration, when approached with Clean Code principles in mind, becomes significantly more manageable and less prone to errors. This section explores the core principles and their applications in the context of API interactions. * **Single Responsibility Principle (SRP):** A class or module should have one, and only one, reason to change. For API integration, this means separating the API client logic (responsible for making requests and handling responses) from the business logic that uses the data. * **Do This:** Create dedicated classes or modules for interacting with specific APIs, encapsulating all the API-related logic within them. * **Don't Do This:** Mix API calls directly within business logic classes or functions. This makes testing and maintenance difficult. * **Why:** SRP ensures that a change in the API (e.g., endpoint change, data format update) only requires modification in the API client module, not the entire application. * **Open/Closed Principle (OCP):** Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification. This principle applies to API integration by allowing new API features or versions to be adopted without modifying existing code that uses the API. * **Do This:** Use abstract classes or interfaces to define a contract for API clients. Implementations can then be created for different API versions or services. Utilize design patterns such as Strategy or Template Method. * **Don't Do This:** Directly modify existing API client code to accommodate new API features. * **Why:** OCP ensures that changes to the API don't introduce regressions in existing functionality. * **Liskov Substitution Principle (LSP):** Subtypes must be substitutable for their base types without altering the correctness of the program. This is relevant when using polymorphism with API clients. * **Do This:** Ensure that any derived API client classes adhere to the contract defined by the base class or interface. If a method implemented in a sub-class modifies behavior in an unexpected way, it violates LSP. * **Don't Do This:** Create API client subclasses that fundamentally change the behavior of the base class's methods. * **Why:** LSP ensures that you can replace one API client implementation with another without causing unexpected errors. * **Interface Segregation Principle (ISP):** Clients should not be forced to depend on methods they do not use. In the API realm, this translates to creating specific interfaces for different API functionalities, catering to the needs of different parts of the application. * **Do This:** Define multiple, smaller interfaces tailored to specific use cases, rather than a single large interface for the entire API. * **Don't Do This:** Force clients to implement methods they don't need, leading to bloated and confusing implementations. * **Why:** ISP promotes loose coupling and reduces the risk of unintended side effects when API contracts change. * **Dependency Inversion Principle (DIP):** High-level modules should not depend on low-level modules. 
Both should depend on abstractions. Abstractions should not depend on details. Details should depend on abstractions. In API integration, this means that business logic should depend on interfaces for API clients, not on concrete implementations. That allows easy swapping of implementations for testing, changing providers, or other needs. * **Do This:** Inject API client interfaces into classes that need to consume the API. * **Don't Do This:** Directly instantiate API client classes within business logic components. * **Why:** DIP promotes loose coupling, making testing easier, and allowing you to switch API providers without impacting the rest of your system. Dependency Injection (DI) frameworks are invaluable here. ## 2. Connecting with Backend Services and External APIs This section covers the practical aspects of connecting to APIs, including error handling, data transformation, and authentication. ### 2.1 Selecting an HTTP Client * **Standard:** Use a robust and well-maintained HTTP client library. Consider "aiohttp" (async) or "requests" (sync) for Python. For Javascript consider "axios" or the native "fetch" API. * **Do This:** Choose a library that supports features like connection pooling, automatic retries, timeouts, and request/response interceptors, and proper TLS/SSL verification. * **Don't Do This:** Write your own HTTP client or use a rudimentary library that lacks essential features. * **Why:** Using a mature HTTP client library simplifies development and reduces the risk of introducing bugs or security vulnerabilities. """python import requests def get_data_from_api(url): try: response = requests.get(url, timeout=10) # added timeout response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx) return response.json() except requests.exceptions.RequestException as e: print(f"Error communicating with API: {e}") return None api_url = "https://example.com/api/data" data = get_data_from_api(api_url) if data: print(data) """ ### 2.2 Error Handling * **Standard:** Implement robust error handling to gracefully handle API failures. Use "try-except" blocks, check response codes, and log errors appropriately. Implement retry mechanisms with exponential backoff for transient errors. * **Do This:** Wrap API calls in "try-except" blocks to catch potential exceptions (e.g., network errors, timeouts, invalid responses). Use "response.raise_for_status()" to check for HTTP errors, making special consideration for rate limiting. Log the complete error message, request URL, and any relevant context. * **Don't Do This:** Ignore errors or simply print error messages without proper logging and handling. * **Why:** Proper error handling prevents application crashes, provides valuable debugging information, and ensures a better user experience. """python import requests import time import logging logging.basicConfig(level=logging.INFO) def get_data_from_api(url, max_retries=3, backoff_factor=2): # retry mechanism retries = 0 while retries < max_retries: try: response = requests.get(url, timeout=10) response.raise_for_status() return response.json() except requests.exceptions.RequestException as e: logging.error(f"Attempt {retries + 1} failed: {e}") retries += 1 time.sleep(backoff_factor ** retries) # Exponential backoff logging.error(f"Failed to retrieve data from {url} after {max_retries} attempts") return None """ ### 2.3 Data Transformation * **Standard:** Decouple the API's data format from your application's data model. 
Use data transfer objects (DTOs) or similar mechanisms to map the API response to your internal representation. * **Do This:** Create dedicated classes or functions to transform API responses into your application's data structures. Centralize this mapping logic to simplify changes. Use validation libraries like Pydantic (for Python) for structural validation and automated type conversion. * **Don't Do This:** Directly use the API's data format throughout your application. This tightly couples your code to the API and makes it difficult to adapt to changes. * **Why:** Data transformation ensures that your application remains independent of the specific API's data format, improving maintainability and flexibility. """python from pydantic import BaseModel import requests class User(BaseModel): # pydantic model id: int name: str email: str def get_user_from_api(user_id: int) -> User | None: url = f"https://example.com/api/users/{user_id}" try: response = requests.get(url) response.raise_for_status() user_data = response.json() return User(**user_data) # Validate and convert using Pydantic except requests.exceptions.RequestException as e: print(f"Error: {e}") return None user = get_user_from_api(1) if user: print(f"User name: {user.name}") """ ### 2.4 Authentication and Authorization * **Standard:** Implement secure authentication and authorization mechanisms according to the API's requirements (API Keys, OAuth 2.0, JWT). Store secrets securely using environment variables or dedicated secret management tools. * **Do This:** Use a dedicated library for handling authentication protocols. Store API keys and secrets securely (e.g., using environment variables or a vault). Properly handle token refresh flows in OAuth 2.0 when using access tokens with limited lifetimes. Use HTTPS for all API communication. * **Don't Do This:** Hardcode API keys or secrets in your code. Skip SSL/TLS verification. Store cryptographic keys anywhere in your source repository. * **Why:** Secure authentication and authorization protect your application and the API from unauthorized access. """python import os import requests API_KEY = os.environ.get("MY_API_KEY") # Store in environment variable def call_api_with_auth(url): headers = {"Authorization": f"Bearer {API_KEY}"} # authorization header response = requests.get(url, headers=headers) response.raise_for_status() return response.json() """ ### 2.5 Rate Limiting * **Standard:** Understand and respect the API's rate limits. Implement mechanisms to avoid exceeding these limits, such as caching, throttling, and exponential backoff with jitter. * **Do This:** Check the API's documentation for rate limits. Implement a throttling mechanism to control the number of requests per unit of time. Cache API responses when appropriate (especially for frequently accessed data that doesn't change often). Handle "429 Too Many Requests" errors gracefully using exponential backoff and jitter. * **Don't Do This:** Ignore rate limits and bombard the API with requests. This can lead to temporary or permanent blocking. * **Why:** Respecting rate limits ensures fair usage of the API and prevents your application from being blocked. 
"""python import time import requests import logging import random logging.basicConfig(level=logging.INFO) def call_api_with_rate_limiting(url, delay=1): try: response = requests.get(url) response.raise_for_status() return response.json() except requests.exceptions.HTTPError as e: if e.response.status_code == 429: retry_after = int(e.response.headers.get("Retry-After", 60)) # get from header jitter = random.uniform(0, 1) # add jitter wait_time = retry_after + jitter logging.warning(f"Rate limit exceeded. Waiting {wait_time:.2f} seconds.") time.sleep(wait_time) return call_api_with_rate_limiting(url) # Recursive call! else: raise # Re-raise the exception for other errors time.sleep(delay) # delay for a specified time return response.json() """ ## 3. Design Patterns for API Integration Several design patterns can help improve the structure and maintainability of API integration code. ### 3.1 Facade Pattern * **Standard:** Use a facade to provide a simplified interface to a complex API. This hides the underlying complexity and makes it easier for clients to use the API. * **Do This:** Create a facade class that encapsulates the API client and provides a simple, high-level interface for common operations. * **Don't Do This:** Expose the raw API client directly to clients. This exposes unnecessary complexity and makes it harder to adapt to API changes. * **Why:** The Facade pattern shields calling code from the complexities of direct API interaction. """python import requests class WeatherAPIClient: BASE_URL = "https://api.weatherapi.com/v1" def __init__(self, api_key): self.api_key = api_key def get_current_weather(self, city): url = f"{self.BASE_URL}/current.json?key={self.api_key}&q={city}" response = requests.get(url) response.raise_for_status() return response.json() class WeatherFacade: def __init__(self, api_key): self.api_client = WeatherAPIClient(api_key) # encapsulates API def get_temperature(self, city): data = self.api_client.get_current_weather(city) return data["current"]["temp_c"] # simplified interface # Usage: weather_facade = WeatherFacade("YOUR_API_KEY") temperature = weather_facade.get_temperature("London") print(f"Temperature in London: {temperature}°C") """ ### 3.2 Adapter Pattern * **Standard:** Use an adapter to convert the interface of an API client class into another interface that clients expect. This is useful when integrating with APIs that have different interfaces. * **Do This:** Create an adapter class that implements the desired interface and delegates calls to the API client. Use this pattern to normalize interfaces from different API endpoint calls to make them consistent for calling code. * **Don't Do This:** Modify the API client class directly to fit the desired interface (violates OCP). * **Why:** The Adapter pattern allows integrating disparate API interfaces or data models into a common format. """python class OldAPI: def fetch_data(self): return {"old_data": "value"} class NewAPIInterface: def get_data(self): raise NotImplementedError class OldAPIToNewAPIAdapter(NewAPIInterface): # Adapter class def __init__(self, old_api): self.old_api = old_api def get_data(self): old_data = self.old_api.fetch_data() return {"new_data": old_data["old_data"]} old_api = OldAPI() adapter = OldAPIToNewAPIAdapter(old_api) new_data = adapter.get_data() print(new_data) # Output: {'new_data': 'value'} """ ### 3.3 Strategy Pattern * **Standard:** If you have multiple ways to call an API (different authentication methods, different endpoints for the same functionality, etc.) 
## 3. Design Patterns for API Integration

Several design patterns can help improve the structure and maintainability of API integration code.

### 3.1 Facade Pattern

* **Standard:** Use a facade to provide a simplified interface to a complex API. This hides the underlying complexity and makes it easier for clients to use the API.
* **Do This:** Create a facade class that encapsulates the API client and provides a simple, high-level interface for common operations.
* **Don't Do This:** Expose the raw API client directly to clients. This exposes unnecessary complexity and makes it harder to adapt to API changes.
* **Why:** The Facade pattern shields calling code from the complexities of direct API interaction.

"""python
import requests

class WeatherAPIClient:
    BASE_URL = "https://api.weatherapi.com/v1"

    def __init__(self, api_key):
        self.api_key = api_key

    def get_current_weather(self, city):
        url = f"{self.BASE_URL}/current.json?key={self.api_key}&q={city}"
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()

class WeatherFacade:
    def __init__(self, api_key):
        self.api_client = WeatherAPIClient(api_key)  # Encapsulates the API client

    def get_temperature(self, city):
        data = self.api_client.get_current_weather(city)
        return data["current"]["temp_c"]  # Simplified interface

# Usage:
weather_facade = WeatherFacade("YOUR_API_KEY")
temperature = weather_facade.get_temperature("London")
print(f"Temperature in London: {temperature}°C")
"""

### 3.2 Adapter Pattern

* **Standard:** Use an adapter to convert the interface of an API client class into another interface that clients expect. This is useful when integrating with APIs that have different interfaces.
* **Do This:** Create an adapter class that implements the desired interface and delegates calls to the API client. Use this pattern to normalize interfaces from different API endpoint calls so they are consistent for calling code.
* **Don't Do This:** Modify the API client class directly to fit the desired interface (this violates the Open/Closed Principle).
* **Why:** The Adapter pattern allows integrating disparate API interfaces or data models into a common format.

"""python
class OldAPI:
    def fetch_data(self):
        return {"old_data": "value"}

class NewAPIInterface:
    def get_data(self):
        raise NotImplementedError

class OldAPIToNewAPIAdapter(NewAPIInterface):  # Adapter class
    def __init__(self, old_api):
        self.old_api = old_api

    def get_data(self):
        old_data = self.old_api.fetch_data()
        return {"new_data": old_data["old_data"]}

old_api = OldAPI()
adapter = OldAPIToNewAPIAdapter(old_api)
new_data = adapter.get_data()
print(new_data)  # Output: {'new_data': 'value'}
"""

### 3.3 Strategy Pattern

* **Standard:** If you have multiple ways to call an API (different authentication methods, different endpoints for the same functionality, etc.), use the Strategy pattern to encapsulate each approach in a separate strategy class. This allows you to switch between strategies at runtime.
* **Do This:** Define a common interface for all strategies. Create concrete strategy classes for each approach. Inject the desired strategy into the class that needs to call the API.
* **Don't Do This:** Use conditional statements to switch between different approaches. This makes the code harder to read and maintain.
* **Why:** Provides implementation flexibility and facilitates switching strategies without modifying the core logic.

"""python
import requests

class AuthStrategy:
    def apply_auth(self, request):
        raise NotImplementedError()

class APIKeyAuth(AuthStrategy):
    def __init__(self, api_key):
        self.api_key = api_key

    def apply_auth(self, request):
        request.headers["X-API-Key"] = self.api_key
        return request

class OAuth2Auth(AuthStrategy):
    def __init__(self, token):
        self.token = token

    def apply_auth(self, request):
        request.headers["Authorization"] = f"Bearer {self.token}"
        return request

class APIClient:
    def __init__(self, auth_strategy: AuthStrategy):
        self.auth_strategy = auth_strategy

    def get(self, url):
        request = requests.Request("GET", url)
        prepared_request = self.auth_strategy.apply_auth(request)  # Apply the strategy
        session = requests.Session()
        response = session.send(prepared_request.prepare())
        response.raise_for_status()
        return response.json()

# Usage:
api_key_auth = APIKeyAuth("your_api_key")
api_client_api_key = APIClient(api_key_auth)
data = api_client_api_key.get("https://example.com/api/data")
print(data)

oauth2_auth = OAuth2Auth("your_oauth_token")
api_client_oauth = APIClient(oauth2_auth)
data = api_client_oauth.get("https://example.com/api/data")
print(data)
"""

## 4. Technology-Specific Considerations

### 4.1 Python

* **Asyncio:** Use "asyncio" and "aiohttp" for asynchronous API calls to improve concurrency and performance in I/O-bound applications (see the sketch after this list).
* **Type Hints:** Use type hints extensively to improve code readability and catch type-related errors early.
* **Pydantic:** Use Pydantic for data validation and serialization/deserialization of API requests and responses.
* **Requests Library:** "requests" is synchronous and blocks the calling thread, so use threading or an asyncio-based client when blocking would hurt performance.
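A minimal sketch of concurrent requests with "aiohttp" follows; the endpoint URLs are placeholders and error handling is kept to the bare minimum for brevity.

"""python
import asyncio
import aiohttp

async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as response:
        response.raise_for_status()
        return await response.json()

async def main():
    urls = [
        "https://example.com/api/users/1",  # Placeholder endpoints
        "https://example.com/api/users/2",
    ]
    async with aiohttp.ClientSession() as session:
        # Issue the requests concurrently instead of one after another.
        results = await asyncio.gather(*(fetch_json(session, url) for url in urls))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
"""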
### 4.2 JavaScript

* **Fetch API/Axios:** Use "fetch" (native) or "axios" (library) for making HTTP requests. Axios is often preferred for its richer error handling and legacy browser support.
* **Async/Await:** Leverage "async/await" syntax for asynchronous API calls to improve code readability and maintainability.
* **TypeScript:** Enable TypeScript support to statically type-check code during development, ensuring that API requests are correctly constructed and API responses are properly handled.
* **Node.js:** Use Node.js when you need to process large numbers of asynchronous requests.

## 5. Testing API Integrations

* **Standard:** Thoroughly test API integrations to ensure correctness, reliability, and performance. Use mock APIs or stubs to isolate your code during testing.
* **Do This:** Write unit tests for API client classes, mocking the HTTP client to control the API responses. Use integration tests to verify the end-to-end flow, including actual API calls to a test environment. Consider contract testing to ensure that your API client adheres to the API's contract.
* **Don't Do This:** Skip testing API integrations or rely solely on manual testing. This can lead to unexpected errors and regressions.
* **Why:** Testing API integrations ensures that your application works correctly with the API and catches regressions when the API changes.

"""python
import unittest
from unittest.mock import patch, Mock
import requests

class MockResponse:
    def __init__(self, json_data, status_code):
        self.json_data = json_data
        self.status_code = status_code

    def json(self):
        return self.json_data

    def raise_for_status(self):
        if self.status_code >= 400:
            raise requests.exceptions.HTTPError("Error")

class APITest(unittest.TestCase):
    @patch('requests.get')  # Mock the requests.get method
    def test_get_data_from_api_success(self, mock_get):
        mock_response = MockResponse({"key": "value"}, 200)
        mock_get.return_value = mock_response
        from your_module import get_data_from_api  # Import your function here
        data = get_data_from_api("http://example.com/api")
        self.assertEqual(data, {"key": "value"})

    @patch('requests.get')
    def test_get_data_from_api_failure(self, mock_get):
        mock_response = MockResponse(None, 500)
        mock_response.raise_for_status = Mock(side_effect=requests.exceptions.HTTPError("Error"))
        mock_get.return_value = mock_response
        from your_module import get_data_from_api
        data = get_data_from_api("http://example.com/api")
        self.assertIsNone(data)
"""

## 6. Documentation

* **Standard:** Document all API integrations, including the API's purpose, authentication methods, request/response formats, and error handling strategies.
* **Do This:** Use docstrings to document API client classes and methods. Create separate documents or wikis to describe the API integration in more detail. Use tooling like Swagger or OpenAPI to document APIs and their integration points.
* **Don't Do This:** Leave API integrations undocumented. This makes it difficult for others (and yourself in the future) to understand and maintain the code.
* **Why:** Clear documentation is essential for maintainability and collaboration.

## 7. Security Considerations

* **Standard:** Prioritize security when integrating with APIs. Enforce HTTPS, validate input, sanitize output, and protect against common web vulnerabilities.
* **Do This:** Always use HTTPS for API communication. Validate all input data to prevent injection attacks. Sanitize all output data to prevent cross-site scripting (XSS) attacks. Follow the principle of least privilege when configuring API access controls. Protect against CSRF attacks. A small validation sketch follows this list.
* **Don't Do This:** Store sensitive data in plain text. Trust user input without validation. Disable security features.
* **Why:** Security vulnerabilities can expose sensitive data and compromise your application.
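As a minimal sketch of the validation and output-sanitization points above (assuming Pydantic v2; the "ProductQuery" model and the endpoint are hypothetical): validate untrusted input before it reaches the API call, and escape anything echoed back into HTML.

"""python
import html
import requests
from pydantic import BaseModel, Field, ValidationError

class ProductQuery(BaseModel):
    # Hypothetical model constraining user-supplied query parameters.
    product_id: int = Field(gt=0)
    region: str = Field(pattern=r"^[a-z]{2}$")  # e.g. "us", "de"

def fetch_product_name(raw_params: dict) -> str | None:
    try:
        query = ProductQuery(**raw_params)  # Reject malformed or malicious input early
    except ValidationError as e:
        print(f"Invalid input: {e}")
        return None
    url = f"https://example.com/api/{query.region}/products/{query.product_id}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    name = response.json().get("name", "")
    return html.escape(name)  # Sanitize before rendering into HTML
"""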
By adhering to these standards, developers can ensure that API integrations are clean, maintainable, performant, and secure, aligning with the core principles of Clean Code.

# Deployment and DevOps Standards for Clean Code

This document outlines the Clean Code principles specifically applied to Deployment and DevOps. It focuses on creating maintainable, reliable, and secure deployment pipelines and infrastructure-as-code.

## 1. Build Processes and CI/CD

### 1.1. Standard: Automate Everything

**Do This:** Fully automate build, test, and deployment processes.

**Don't Do This:** Rely on manual steps or inconsistent scripts.

**Why:** Automation reduces human error, improves consistency, and accelerates feedback loops. This aligns perfectly with Clean Code's emphasis on maintainability and reducing complexity.

**Example (GitHub Actions):**

"""yaml
name: CI/CD Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Build with Gradle
        run: ./gradlew build

  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - name: Grant execute permission for gradlew
        run: chmod +x gradlew
      - name: Run Tests with Gradle
        run: ./gradlew test

  deploy:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'  # Only deploy from the main branch
    steps:
      - uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy to AWS Elastic Beanstalk
        run: |
          zip -r deployment.zip *
          aws s3 cp deployment.zip s3://your-deployment-bucket/
          aws elasticbeanstalk update-environment --environment-name YourEnvironmentName --version-label ${{ github.sha }}
"""

**Anti-Pattern:** Manually deploying code to production servers. This is inherently error-prone and makes rollbacks difficult.

### 1.2. Standard: Use Infrastructure as Code (IaC)

**Do This:** Define infrastructure using code (e.g., Terraform, CloudFormation, Pulumi).

**Don't Do This:** Manually configure infrastructure through web consoles.

**Why:** IaC provides version control, repeatability, and auditability for infrastructure. This is analogous to version control for source code, a core Clean Code principle.

**Example (Terraform):**

"""terraform
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  required_version = ">= 1.4"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b43ad4bf5336f"  # Replace with a valid AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "CleanCodeExampleServer"
  }
}

output "public_ip" {
  value = aws_instance.example.public_ip
}
"""

Explanation: This Terraform configuration defines an AWS EC2 instance. The "aws_instance" resource block describes the instance details, such as the AMI (Amazon Machine Image) and instance type. The "output" block exposes the public IP of the created instance.

**Modern Approaches:**

* Use Terraform Cloud for state management and collaboration.
* Implement policy-as-code using tools like Sentinel or OPA (Open Policy Agent) to enforce compliance.

### 1.3. Standard: Implement Continuous Integration

**Do This:** Integrate changes frequently (e.g., multiple times per day).

**Don't Do This:** Let branches diverge for long periods before merging.
**Why:** Frequent integration reduces merge conflicts, exposes integration issues early, and promotes a culture of collaboration. This reflects Clean Code's emphasis on simplicity and reducing coupling.

**Specifics for Clean Code:**

* Ensure CI pipelines run static analysis tools that enforce Clean Code principles (e.g., linters, code formatters).
* Run automated tests (unit, integration, end-to-end) on every commit to ensure code quality.

**Example (GitHub Actions with Linting):**

"""yaml
name: CI Pipeline with Linting

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3.9
        uses: actions/setup-python@v3
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
      - name: Lint with flake8
        run: |
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: pytest
"""

This GitHub Actions workflow performs linting with "flake8" before running tests. It checks for code style issues and enforces a maximum complexity, improving code quality.

**Anti-Pattern:** Not having automated linting in your CI pipeline. This lets code that doesn't adhere to style guidelines slip through and can lead to maintenance issues.

### 1.4. Standard: Implement Continuous Delivery/Deployment

**Do This:** Automate the release process to production. Choose the appropriate strategy (Continuous Delivery vs. Continuous Deployment) based on business needs and risk tolerance.

**Don't Do This:** Require manual approvals or long release cycles for every deployment.

**Why:** CD reduces the time to market for new features, enables faster feedback loops, and improves the overall agility of the development process.

**Technology Specific (Blue/Green Deployment with AWS):**

This involves creating two identical environments ("blue" and "green"), deploying the new version to the "green" environment, testing it, and then switching traffic from the "blue" (old) to the "green" (new) environment.

**Advantages:** Immediate rollback capability, minimal downtime.

**Example (Simplified AWS CodeDeploy for Blue/Green):**

(Note: This is a simplified example and requires more setup in a real-world scenario, including load balancer configuration and environment specifics.)

1. **Create two Elastic Beanstalk environments (Blue and Green).**
2. **Use CodeDeploy to deploy the application to the Green environment.** CodeDeploy handles the deployment process, including installing dependencies and starting the application.
3. **After successful deployment and testing, switch the Elastic Load Balancer (ELB) to point to the Green environment.**
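The traffic switch in step 3 can itself be automated. The sketch below assumes two Elastic Beanstalk environments named "my-app-blue" and "my-app-green" and uses "boto3" to swap their CNAMEs; it is illustrative only and omits the health checks you would run before and after the swap.

"""python
import boto3

def promote_green(blue_env="my-app-blue", green_env="my-app-green", region="us-east-1"):
    # Swap the CNAMEs of the blue and green Elastic Beanstalk environments.
    # After the swap, the URL that previously pointed at the blue environment serves
    # the newly deployed green version; running the same swap again rolls back.
    eb = boto3.client("elasticbeanstalk", region_name=region)
    eb.swap_environment_cnames(
        SourceEnvironmentName=blue_env,
        DestinationEnvironmentName=green_env,
    )

if __name__ == "__main__":
    promote_green()
"""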
**Clean Code Implications:**

* Code that is well-tested and follows Clean Code principles is easier to deploy safely with CI/CD.
* Automated rollbacks are crucial. Having clean, modular code makes it much easier to quickly revert to a previous state if issues arise after deployment.

## 2. Production Considerations

### 2.1. Standard: Monitoring and Alerting

**Do This:** Implement comprehensive monitoring and alerting for all production systems.

**Don't Do This:** Wait for users to report issues.

**Why:** Monitoring provides visibility into the health and performance of applications, allowing for proactive identification and resolution of issues. This aligns with Clean Code's principle of error handling, but extends it to the operational environment.

**Example (Prometheus and Grafana):**

1. **Instrument your code with Prometheus metrics:**

"""python
from prometheus_client import start_http_server, Summary, Gauge
import random
import time

# Create metrics to track time spent and requests made.
# The Summary metric tracks the count, sum, and quantiles.
request_processing_time = Summary('request_processing_seconds', 'Time spent processing request')
g = Gauge('my_python_service_current_connections', 'Number of current connections')

# Decorate the function with the metric.
@request_processing_time.time()
def process_request(request):
    # A dummy function that takes some time.
    time.sleep(random.random())  # Simulate request processing
    g.inc()
    time.sleep(0.2)
    g.dec()
    return "Request processed"

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request("Test Request")
"""

2. **Configure Prometheus to scrape metrics from your application:** This involves adding a job configuration to "prometheus.yml".

3. **Create Grafana dashboards to visualize the metrics:** Grafana can query Prometheus and display the collected data in graphs and charts. Dashboards can show request latency, error rates, and resource utilization.

**Clean Code Specifics:**

* Use meaningful metric names and labels (e.g., "http_request_duration_seconds{path='/users',method='GET'}").
* Log structured data (e.g., JSON) that can be easily processed by monitoring tools.

**Anti-Pattern:** Relying solely on application logs for monitoring. Logs are important, but they are not a substitute for dedicated monitoring tools.

### 2.2. Standard: Logging

**Do This:** Implement consistent and structured logging.

**Don't Do This:** Use unstructured or inconsistent log messages.

**Why:** Logging provides valuable information for debugging, auditing, and security analysis. Structured logging (e.g., JSON format) makes it easier to search, filter, and analyze log data. This supports the Clean Code principle of making code understandable and debuggable.

**Example (Python with Structured Logging):**

"""python
import logging
import json

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(message)s')
logger = logging.getLogger(__name__)

def process_data(data):
    try:
        # Simulate some processing
        result = data["value"] * 2
        log_data = {
            "event": "Data processed",
            "input": data,
            "result": result,
            "status": "success"
        }
        logger.info(json.dumps(log_data))
        return result
    except Exception as e:
        log_data = {
            "event": "Error processing data",
            "input": data,
            "error": str(e),
            "status": "failure"
        }
        logger.error(json.dumps(log_data))
        return None

# Example usage
data = {"value": 10}
process_data(data)

data_with_error = {"text": "hello"}
process_data(data_with_error)
"""

**Best Practices:**

* Use appropriate log levels (DEBUG, INFO, WARNING, ERROR, CRITICAL).
* Include timestamps, transaction IDs, and other contextual information in log messages.
* Sanitize sensitive data before logging (see the sketch after this list).
* Implement log rotation to prevent disk space exhaustion.
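One way to enforce the sanitization rule above is a logging filter that redacts known-sensitive keys before a record is emitted. This is a minimal sketch; the list of sensitive field names is an assumption and should match your own data model.

"""python
import json
import logging

SENSITIVE_KEYS = {"password", "api_key", "authorization", "token"}  # Assumed field names

class RedactSensitiveData(logging.Filter):
    # Replace values of sensitive keys in JSON-formatted log messages.
    def filter(self, record):
        try:
            payload = json.loads(record.msg)
        except (TypeError, ValueError):
            return True  # Not JSON; leave the record unchanged
        for key in list(payload):
            if key.lower() in SENSITIVE_KEYS:
                payload[key] = "***REDACTED***"
        record.msg = json.dumps(payload)
        return True

logger = logging.getLogger("app")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
logger.addFilter(RedactSensitiveData())

logger.info(json.dumps({"event": "login", "user": "alice", "password": "hunter2"}))
# -> {"event": "login", "user": "alice", "password": "***REDACTED***"}
"""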
**Anti-Patterns:**

* Logging sensitive information (e.g., passwords, API keys).
* Using generic log messages that don't provide enough context.
* Excessive logging that impacts performance.

### 2.3. Standard: Security

**Do This:** Implement security best practices throughout the deployment pipeline.

**Don't Do This:** Treat security as an afterthought.

**Why:** Security vulnerabilities can lead to data breaches, service disruptions, and reputational damage. A security-first mindset is crucial for protecting applications and data.

**Specific Recommendations:**

* **Image Scanning:** Scan Docker images for vulnerabilities before deployment. Tools like Snyk, Anchore, and Clair can automate this process.
* **Secrets Management:** Use a secrets management tool (e.g., HashiCorp Vault, AWS Secrets Manager) to store and manage sensitive credentials. Avoid hardcoding secrets in code or configuration files.
* **Network Policies:** Implement network policies to restrict communication between services. This can prevent lateral movement in case of a security breach.
* **Least Privilege:** Grant only the necessary permissions to each service.

**Example (Using HashiCorp Vault for Secrets Management):**

1. **Store the database password in Vault:**

"""bash
vault kv put secret/myapp/db password="your_database_password"
"""

2. **Retrieve the secret in your application:**

"""python
import hvac
import os

client = hvac.Client(url=os.environ['VAULT_ADDR'], token=os.environ['VAULT_TOKEN'])

read_response = client.secrets.kv.v2.read_secret_version(
    path='myapp/db'
)

db_password = read_response['data']['data']['password']
print(f"The DB password is: {db_password}")
"""

**Modern Approaches:**

* **Service Mesh:** Use service mesh technologies like Istio or Linkerd for secure service-to-service communication and observability.
* **Zero Trust Security:** Implement a zero-trust security model where every request is authenticated and authorized, regardless of its origin.

## 3. Applying Clean Code Principles to IaC and Configuration

### 3.1. Standard: DRY (Don't Repeat Yourself)

**Do This:** Use modules and functions to avoid duplication in IaC code.

**Don't Do This:** Copy and paste code blocks.

**Why:** Duplication makes code harder to maintain and update. Modules and functions promote code reuse and reduce redundancy.

**Example (Terraform Module):**

"""terraform
# modules/ec2_instance/main.tf
resource "aws_instance" "example" {
  ami           = var.ami
  instance_type = var.instance_type

  tags = {
    Name = var.instance_name
  }
}

# modules/ec2_instance/variables.tf
variable "ami" {
  type        = string
  description = "The AMI to use for the instance"
}

variable "instance_type" {
  type        = string
  description = "The instance type"
}

variable "instance_name" {
  type        = string
  description = "The name of the instance"
}
"""

Usage:

"""terraform
module "web_server" {
  source        = "./modules/ec2_instance"
  ami           = "ami-0c55b43ad4bf5336f"
  instance_type = "t2.micro"
  instance_name = "WebServer"
}

module "db_server" {
  source        = "./modules/ec2_instance"
  ami           = "ami-0c55b43ad4bf5336f"
  instance_type = "t3.small"
  instance_name = "DatabaseServer"
}
"""

### 3.2. Standard: Single Responsibility Principle (SRP)

**Do This:** Design modules and functions to have a single, well-defined purpose.

**Don't Do This:** Create "god modules" that do everything.

**Why:** SRP makes code easier to understand, test, and maintain. Modules should focus on a specific aspect of the infrastructure.

**Example:** Instead of having one large module that creates the entire VPC, break it down into smaller modules:

* "vpc" module: Creates the VPC.
* "subnet" module: Creates subnets.
* "security_group" module: Creates security groups.
* "route_table" module: Creates route tables.

### 3.3. Standard: Readability

**Do This:** Use meaningful variable names, comments, and formatting to make IaC code easy to understand.
**Don't Do This:** Use cryptic names or inconsistent formatting.

**Why:** Readability is essential for maintainability. Clear and concise code reduces the cognitive load for developers.

**Example:**

"""terraform
# Bad
resource "aws_instance" "a" {
  ami           = "ami-12345"
  instance_type = "t2.micro"
}

# Good
resource "aws_instance" "web_server" {
  ami           = "ami-0c55b43ad4bf5336f"  # Amazon Linux 2 AMI
  instance_type = "t2.micro"               # Small instance for a web server

  tags = {
    Name = "web-server-instance"
  }
}
"""

### 3.4. Standard: Testability

**Do This:** Write automated tests for IaC code to verify that it creates the expected infrastructure.

**Don't Do This:** Manually verify infrastructure changes.

**Why:** Automated testing provides confidence that infrastructure changes will not break existing systems.

**Example (Using Terratest):**

"""go
package test

import (
	"fmt"
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

func TestTerraformAwsInstance(t *testing.T) {
	t.Parallel()

	terraformOptions := &terraform.Options{
		TerraformDir: "../examples/aws_instance",
	}

	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	instancePublicIp := terraform.Output(t, terraformOptions, "public_ip")
	fmt.Println("Instance Public IP: ", instancePublicIp)
	assert.NotEmpty(t, instancePublicIp) // Check that the output is not empty
}
"""

## 4. Handling Configuration Data

### 4.1 Externalize Configuration

Configuration should be externalized from the application code. For example, use environment variables, configuration files, or a dedicated configuration management system.

"""python
import os

# Access configuration from an environment variable.
database_url = os.environ.get("DATABASE_URL")
"""

### 4.2 Configuration Validation

Validate configuration data at startup to ensure that it is valid and consistent. This helps prevent runtime errors and ensures that the application is properly configured.

"""python
def validate_config(config):
    if not config.get('api_key'):
        raise ValueError("API key must be set")
    # Add further configuration checks here
"""

### 4.3 Secrets Management Tools

Manage secrets securely. Tools such as HashiCorp Vault or AWS Secrets Manager should be used to store them.

"""python
import hvac
import os

# Configure the Vault client
client = hvac.Client(url=os.environ['VAULT_ADDR'], token=os.environ['VAULT_TOKEN'])

# Read a secret
response = client.secrets.kv.v2.read_secret_version(path='my-secret')
secret_value = response['data']['data']['value']
"""

## 5. Conclusion

Adhering to Clean Code principles in deployment and DevOps practices leads to more reliable, maintainable, and secure systems. By embracing automation, infrastructure as code, and security best practices, development teams can deliver value to customers faster and with greater confidence. This document provides a solid foundation for building a clean and efficient deployment pipeline. Remember that "clean" is not a destination but an ongoing journey, and these guidelines should be reviewed and updated periodically to reflect the latest technologies and best practices.