diff --git a/assets/images/GrpcMetatype.drawio-e6e626affae448f5b44a48ab82805ee9.png b/assets/images/GrpcMetatype.drawio-e6e626affae448f5b44a48ab82805ee9.png
new file mode 100644
index 0000000000..d62e1b535d
Binary files /dev/null and b/assets/images/GrpcMetatype.drawio-e6e626affae448f5b44a48ab82805ee9.png differ
diff --git a/assets/js/4f68146b.3a54c1b3.js b/assets/js/4f68146b.3a54c1b3.js
deleted file mode 100644
index 60f1e377a7..0000000000
--- a/assets/js/4f68146b.3a54c1b3.js
+++ /dev/null
@@ -1 +0,0 @@
-"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[1732],{53919:(e,n,r)=>{r.r(n),r.d(n,{assets:()=>d,contentTitle:()=>l,default:()=>u,frontMatter:()=>a,metadata:()=>c,toc:()=>h});var t=r(86070),s=r(25710),i=r(27676),o=r(65480);const a={},l="Distributed execution flow paradigms",c={permalink:"/blog/2024/08/27/distributed-execution-flow-paradigms",editUrl:"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",source:"@site/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",title:"Distributed execution flow paradigms",description:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.",date:"2024-08-27T00:00:00.000Z",tags:[],readingTime:10.92,hasTruncateMarker:!1,authors:[],frontMatter:{},unlisted:!1,nextItem:{title:"Python on WebAssembly: How?",permalink:"/blog/2024/08/26/python-on-webassembly"}},d={authorsImageUrls:[]},h=[{value:"1. 
Event-Driven Architecture with Message Queues",id:"1-event-driven-architecture-with-message-queues",level:3},{value:"Advantages",id:"advantages",level:4},{value:"Challenges",id:"challenges",level:4},{value:"2. The Saga Pattern",id:"2-the-saga-pattern",level:3},{value:"Advantages",id:"advantages-1",level:4},{value:"Drawbacks",id:"drawbacks",level:4},{value:"3. Stateful Orchestrators",id:"3-stateful-orchestrators",level:3},{value:"Advantages",id:"advantages-2",level:4},{value:"Challenges",id:"challenges-1",level:4},{value:"4. Durable Execution",id:"4-durable-execution",level:3},{value:"Advantages",id:"advantages-3",level:4},{value:"Challenges",id:"challenges-2",level:4}];function p(e){const n={a:"a",admonition:"admonition",code:"code",h3:"h3",h4:"h4",img:"img",li:"li",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,s.R)(),...e.components},{Details:a}=n;return a||function(e,n){throw new Error("Expected "+(n?"component":"object")+" `"+e+"` to be defined: you likely forgot to import, pass, or provide it.")}("Details",!0),(0,t.jsxs)(t.Fragment,{children:[(0,t.jsx)(n.p,{children:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? 
Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies."}),"\n",(0,t.jsx)(n.p,{children:"Having multiple components in your system introduces more failure points, which is a common phenomenon in complex systems. But one important behavior any application must ensure is that the execution flow reaches its completion. As systems grow in features and complexity, the likelihood of long-running processes increases. To ensure these processes complete as intended, several solutions have been introduced over the last few decades.\nLet's explore some of the solutions that have been proposed to achieve workflow completeness."}),"\n",(0,t.jsx)(n.h3,{id:"1-event-driven-architecture-with-message-queues",children:"1. Event-Driven Architecture with Message Queues"}),"\n",(0,t.jsx)(n.p,{children:"This architecture relies heavily on services communicating by publishing and subscribing to events using message queues. Message queues are persistent storages that ensure data is not lost during failures or service unavailability. Components in a distributed system synchronize by using events/messages through these independent services. While this approach offers service decomposability and fault tolerance, it has some shortcomings. For example, using message queues comes with the overhead of managing messages (e.g., deduplication and message ordering). It also isn\u2019t ideal for systems requiring immediate consistency across components. 
Some technologies and patterns that utilize this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://www.rabbitmq.com/",children:"RabbitMQ"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://aws.amazon.com/sqs/",children:"Amazon SQS"})}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(16676).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"5em"},children:(0,t.jsx)(n.p,{children:"Fig. Event Driven Architecture with Message Queues - RabbitMQ"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Improved Scalability"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Responsiveness"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Fault Tolerance"}),"\n",(0,t.jsx)(n.li,{children:"Simplified Complex Workflows"}),"\n",(0,t.jsx)(n.li,{children:"Real-Time Data Processing"}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Event Ordering"}),"\n",(0,t.jsx)(n.li,{children:"Data Consistency"}),"\n",(0,t.jsx)(n.li,{children:"Monitoring and Debugging"}),"\n",(0,t.jsx)(n.li,{children:"Event Deduplication"}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"You can mitigate or reduce these challenges by following best practices like Event Sourcing, Idempotent Processing, CQRS (Command Query Responsibility Segregation), and Event Versioning."}),"\n",(0,t.jsxs)(n.h3,{id:"2-the-saga-pattern",children:["2. The ",(0,t.jsx)(n.a,{href:"https://microservices.io/patterns/data/saga.html",children:"Saga Pattern"})]}),"\n",(0,t.jsx)(n.p,{children:"This design pattern aims to achieve consistency across different services in a distributed system by breaking complex transactions spanning multiple components into a series of local transactions. Each of these transactions triggers an event or message that starts the next transaction in the sequence. 
If any local transaction fails to complete, a series of compensating actions roll back the effects of preceding transactions. While the orchestration of local transactions can vary, the pattern aims to achieve consistency in a microservices-based system. Events are designed to be stored in durable storage systems or logs, providing a trail to reconstruct the system to a state after a failure. While the saga pattern is an effective way to ensure consistency, it can be challenging to implement timer/timeout-based workflows and to design and implement the compensating actions for local transactions."}),"\n",(0,t.jsxs)(n.p,{children:[(0,t.jsx)(n.strong,{children:"Note"}),": In the Saga pattern, a compensating transaction must be idempotent and retryable. These principles ensure that transactions can be managed without manual intervention."]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(35936).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"10em"},children:(0,t.jsx)(n.p,{children:"Fig. The Saga Pattern for Order delivery system"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages-1",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Ensures data consistency in a distributed system without tight coupling."}),"\n",(0,t.jsx)(n.li,{children:"Provides Roll back if one of the operations in the sequence fails."}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"drawbacks",children:"Drawbacks"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Might be challenging to implement initially."}),"\n",(0,t.jsx)(n.li,{children:"Hard to debug."}),"\n",(0,t.jsx)(n.li,{children:"Compensating transactions don\u2019t always work."}),"\n"]}),"\n",(0,t.jsxs)(n.h3,{id:"3-stateful-orchestrators",children:["3. 
",(0,t.jsx)(n.a,{href:"https://docs.oracle.com/en/applications/jd-edwards/cross-product/9.2/eotos/creating-a-stateful-orchestration-release-9-2-8-3.html#u30249073",children:"Stateful Orchestrators"})]}),"\n",(0,t.jsx)(n.p,{children:"Stateful orchestrators provide a solution for long-running workflows by maintaining the state of each step in a workflow. Each step in a workflow represents a task, and these tasks are represented as states inside workflows. Workflows are defined as state machines or directed acyclic graphs (DAGs). In this approach, an orchestrator handles task execution order, transitioning, handling retries, and maintaining state. In the event of a failure, the system can recover from the persisted state. Stateful orchestrators offer significant value in fault tolerance, consistency, and observability. It\u2019s one of the solutions proven effective in modern distributed computing. Some well-known services that provide this solution include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/overview.html",children:"Apache Airflow"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://azure.microsoft.com/en-us/products/logic-apps",children:"Azure Logic Apps"})}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"advantages-2",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"High Resiliency"}),": Stateful orchestrators provide high resiliency in case of outages, ensuring that workflows can continue from where they left off."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Data Persistence"}),": They allow you to keep, review, or reference data from previous events, which is useful for long-running processes."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Extended Runtime"}),": Stateful workflows can continue running for much longer than stateless workflows, 
making them suitable for complex and long-running tasks."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-1",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Additional Complexity"}),": They introduce additional complexity, requiring you to manage issues such as load balancing, CPU and memory usage, and networking."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Cost"}),": With stateful workflows, you pay for the VMs that are running in the cluster, whereas with stateless workflows, you pay only for the actual compute resources consumed."]}),"\n"]}),"\n",(0,t.jsx)(n.h3,{id:"4-durable-execution",children:"4. Durable Execution"}),"\n",(0,t.jsx)(n.p,{children:"Durable execution refers to the ability of a system to preserve the state of an application and persist execution despite failures or interruptions. Durable execution ensures that for every task, its inputs, outputs, call stack, and local variables are persisted. These constraints, or rather features, allow a system to automatically retry or continue running in the face of infrastructure or system failures, ultimately ensuring completion."}),"\n",(0,t.jsx)(n.p,{children:"Durable execution isn\u2019t a completely distinct solution from the ones listed above but rather incorporates some of their strengths while presenting a more comprehensive approach to achieving consistency, fault tolerance, data integrity, resilience for long-running processes, and observability."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/durable-exec.svg",alt:"Durable workflow engine - Temporal"}),"\n",(0,t.jsx)("div",{style:{marginLeft:"15em"},children:"Fig. 
Durable workflow engine"}),"\n",(0,t.jsx)(n.h4,{id:"advantages-3",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Reduced Manual Intervention"}),": Minimizes the need for human intervention by handling retries and failures programmatically."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Improved Observability"}),": Provides a clear audit trail and visibility into the state of workflows, which aids in debugging and monitoring."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Scalability"}),": Scales efficiently across distributed systems while maintaining workflow integrity."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-2",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Resource Intensive"}),": Persistent state storage and management can consume significant resources, especially in large-scale systems."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Latency"}),": The need to persist state and handle retries can introduce latency in the execution flow."]}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"As durable execution grows to be a fundamental driver of distributed computing, some of the solutions which use this architecture are"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://cadenceworkflow.io/",children:"Uber Cadence"})}),"\n"]}),"\n",(0,t.jsxs)(n.p,{children:["Among these, ",(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})," has grown in influence, used by companies like SnapChat, HashiCorp, Stripe, DoorDash, and DataDog. 
Its success is driven by its practical application in real-world scenarios and the expertise of its founders."]}),"\n",(0,t.jsxs)(n.p,{children:["At Metatype, we recognize the value of durable execution and are committed to making it accessible. Our ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," integrates seamlessly into our declarative API development platform, enabling users to harness the power of Temporal directly within Metatype. For those interested in exploring further, our documentation provides a detailed guide on getting started with ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"}),"."]}),"\n",(0,t.jsx)(n.p,{children:"Below is an example of how you can build a simple API to interact with an order delivery temporal workflow within Metatype."}),"\n",(0,t.jsx)(n.admonition,{type:"note",children:(0,t.jsxs)(n.p,{children:["If you are new to Metatype or haven\u2019t set it up yet in your development environment. 
You can follow this ",(0,t.jsx)(n.a,{href:"/docs/tutorials/quick-start",children:"guideline"}),"."]})}),"\n",(0,t.jsx)(n.p,{children:"For this example, the order delivery system will have few components/services such as Payment, Inventory and Delivery."}),"\n",(0,t.jsx)(n.p,{children:"Your temporal workflow definition should look similar to the one below."}),"\n",(0,t.jsxs)(o.Ay,{children:[(0,t.jsxs)(i.A,{value:"typescript",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"src/activities.ts"}),":`"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'async function sleep(time: number) {\n return new Promise((resolve) => {\n setTimeout(resolve, time);\n });\n}\n\nexport async function processPayment(orderId: string): Promise {\n console.log(`Processing payment for order ${orderId}`);\n // Simulate payment processing logic\n await sleep(2);\n return "Payment processed";\n}\n\nexport async function checkInventory(orderId: string): Promise {\n console.log(`Checking inventory for order ${orderId}`);\n // Simulate inventory check logic\n await sleep(2);\n return "Inventory available";\n}\n\nexport async function deliverOrder(orderId: string): Promise {\n console.log(`Delivering order ${orderId}`);\n // Simulate delivery logic\n await sleep(5);\n return "Order delivered";\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Workflow definition inside ",(0,t.jsx)(n.code,{children:"src/workflows.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",metastring:'import {proxyActivities} from "@temporalio/workflow";',children:'\nexport const { processPayment, checkInventory, deliverOrder } =\n proxyActivities<{\n processPayment(orderId: string): Promise;\n checkInventory(orderId: string): Promise;\n deliverOrder(orderId: string): Promise;\n }>({\n startToCloseTimeout: "10 seconds",\n });\n\nexport async function 
OrderWorkflow(orderId: string): Promise {\n const paymentResult = await processPayment(orderId);\n const inventoryResult = await checkInventory(orderId);\n const deliveryResult = await deliverOrder(orderId);\n return `Order ${orderId} completed with results: ${paymentResult}, ${inventoryResult}, ${deliveryResult}`;\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definintion inside ",(0,t.jsx)(n.code,{children:"src/worker.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { NativeConnection, Worker } from "@temporalio/worker";\nimport * as activities from "./activities";\nimport { TASK_QUEUE_NAME } from "./shared";\n\nasync function run() {\n const connection = await NativeConnection.connect({\n address: "localhost:7233",\n });\n\n const worker = await Worker.create({\n connection,\n namespace: "default",\n taskQueue: TASK_QUEUE_NAME,\n workflowsPath: require.resolve("./workflows"),\n activities,\n });\n\n await worker.run();\n}\n\nrun().catch((err) => {\n console.error(err);\n process.exit(1);\n});\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have setup the above components, now you need a client to start of any ",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". 
Here is where metatype comes in, through the simple APIs ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your temporal cluster.\nDown below is the workflow communication bridge for this system expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"})," which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { Policy, t, typegraph } from "@typegraph/sdk/index.ts";\nimport { TemporalRuntime } from "@typegraph/sdk/providers/temporal.ts";\n\ntypegraph(\n {\n name: "order_delivery",\n },\n (g: any) => {\n const pub = Policy.public();\n\n const temporal = new TemporalRuntime({\n name: "order_delivery",\n hostSecret: "HOST",\n namespaceSecret: "NAMESPACE",\n });\n\n const workflow_id = "order-delivery-1";\n\n const order_id = t.string();\n\n g.expose(\n {\n start: temporal.startWorkflow("OrderWorkflow", order_id),\n describe: workflow_id\n ? 
temporal.describeWorkflow().reduce({ workflow_id })\n : temporal.describeWorkflow(),\n },\n pub,\n );\n },\n);\n'})})]}),(0,t.jsxs)(i.A,{value:"python",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"activities.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from temporalio import activity\nimport time\n\n@activity.defn\nasync def process_payment(order_id: str) -> str:\n print(f"Processing payment for order {order_id}")\n # Simulate payment processing logic\n time.sleep(5)\n return "Payment processed"\n\n@activity.defn\nasync def check_inventory(order_id: str) -> str:\n print(f"Checking inventory for order {order_id}")\n # Simulate inventory check logic\n time.sleep(4)\n return "Inventory available"\n\n@activity.defn\nasync def deliver_order(order_id: str) -> str:\n print(f"Delivering order {order_id}")\n time.sleep(8)\n # Simulate delivery logic\n return "Order delivered"\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker defintion inside ",(0,t.jsx)(n.code,{children:"run_worker.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio.client import Client\nfrom temporalio.worker import Worker\n\nfrom activities import process_payment, deliver_order, check_inventory\nfrom shared import ORDER_DELIVERY_QUEUE\nfrom workflows import OrderWorkflow\n\n\nasync def main() -> None:\n client: Client = await Client.connect("localhost:7233", namespace="default")\n worker: Worker = Worker(\n client,\n task_queue=ORDER_DELIVERY_QUEUE,\n workflows=[OrderWorkflow],\n activities=[process_payment, check_inventory, deliver_order],\n )\n await worker.run()\n\n\nif __name__ == "__main__":\n asyncio.run(main())\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have setup the above components, now you need a client to start of any 
",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". Here is where metatype comes in, through the simple APIs ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your temporal cluster.\nDown below is the workflow communication bridge for this system expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"})," which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from typegraph import t, typegraph, Policy, Graph\nfrom typegraph.providers.temporal import TemporalRuntime\n\n\n@typegraph()\ndef example(g: Graph):\n public = Policy.public()\n\n temporal = TemporalRuntime(\n "example", "HOST", namespace_secret="NAMESPACE"\n )\n\n workflow_id = "order-delivery-1"\n\n order_id = t.string()\n\n g.expose(\n public,\n start=temporal.start_workflow("OrderWorkflow", order_id),\n describe=temporal.describe_workflow().reduce({"workflow_id": workflow_id})\n if workflow_id\n else temporal.describe_workflow(),\n )\n'})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add the secrets ",(0,t.jsx)(n.code,{children:"HOST"})," and ",(0,t.jsx)(n.code,{children:"NAMESPACE"})," under your typegraph name inside the ",(0,t.jsx)(n.code,{children:"metatype.yaml"})," file. 
These secrets are important to connect with your temporal cluster and can be safely stored in the config file as shown below."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"metatype.yaml"}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-yaml",children:'typegate:\n dev:\n url: "http://localhost:7890"\n username: admin\n password: password\n secrets:\n example:\n POSTGRES: "postgresql://postgres:password@postgres:5432/db"\n HOST: "http://localhost:7233"\n NAMESPACE: "default"\n'})})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add only the last two lines as the others are auto-generated. Note that secrets are defined under the ",(0,t.jsx)(n.code,{children:"example"})," parent, which is the name of your typegraph. If the name doesn't match, you will face secret not found issues when deploying your typegraph."]}),"\n",(0,t.jsxs)(n.p,{children:["Before deploying the above typegraph, you need to start the temporal server and the worker. You need to have ",(0,t.jsx)(n.a,{href:"https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli",children:"temporal"})," installed on your machine."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"Boot up temporal"}),(0,t.jsx)(n.p,{children:"Start the temporal server."}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"temporal server start-dev\n"})}),(0,t.jsx)(n.p,{children:"Start the worker."}),(0,t.jsxs)(o.Ay,{children:[(0,t.jsx)(i.A,{value:"typescript",children:(0,t.jsx)(n.p,{children:(0,t.jsx)(n.code,{children:"typescript npx ts-node src/worker.ts "})})}),(0,t.jsx)(i.A,{value:"python",children:(0,t.jsx)(n.code,{children:"python python run_worker.py "})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["After booting the temporal server, run the command down below to get a locally spinning ",(0,t.jsx)(n.a,{href:"/docs/reference/typegate",children:"typegate"})," instance with your 
typegraph deployed."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"meta dev\n"})}),"\n",(0,t.jsxs)(n.p,{children:["After completing the above steps, you can access the web GraphQL client of the typegate at ",(0,t.jsx)(n.a,{href:"http://localhost:7890/example",children:(0,t.jsx)(n.code,{children:"http://localhost:7890/example"})}),". Run this query inside the client to start your workflow."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-graphql",children:'mutation {\n start(\n workflow_id: "order-delivery-3"\n task_queue: "order-delivery-queue"\n args: ["order12"]\n )\n}\n'})}),"\n",(0,t.jsxs)(n.p,{children:["After a successful run, you will get the following result which includes the ",(0,t.jsx)(n.code,{children:"run_id"})," of the workflow which has just been started."]}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/start-workflow-result.png",alt:"Query result"}),"\n",(0,t.jsx)(n.p,{children:"You can also check the temporal web UI to monitor your workflows and you should see a result similar to this one."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/temporal-web-ui.png",alt:"Workflows dashboard"}),"\n",(0,t.jsxs)(n.p,{children:["You can explore the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," for more info."]}),"\n",(0,t.jsx)(n.p,{children:"This wraps up the blog, thanks for reading until the end :)"})]})}function u(e={}){const{wrapper:n}={...(0,s.R)(),...e.components};return n?(0,t.jsx)(n,{...e,children:(0,t.jsx)(p,{...e})}):p(e)}},65480:(e,n,r)=>{r.d(n,{Ay:()=>o,gc:()=>a});r(30758);var t=r(3733),s=r(56315),i=r(86070);function o(e){let{children:n}=e;const[r,o]=(0,t.e)();return(0,i.jsx)(s.mS,{choices:{typescript:"Typescript SDK",python:"Python SDK"},choice:r,onChange:o,children:n})}function a(e){let{children:n}=e;const[r]=(0,t.e)();return(0,i.jsx)(s.q9,{choices:{typescript:"Typescript SDK",python:"Python 
SDK"},choice:r,children:n})}},16676:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/eda.drawio-9d730aef7e9f00ffed737626d602be5c.svg"},35936:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/saga.drawio-6f492c8332ead1021dde63fa7daf0efd.svg"}}]);
\ No newline at end of file
diff --git a/assets/js/4f68146b.edca1cc7.js b/assets/js/4f68146b.edca1cc7.js
new file mode 100644
index 0000000000..43a22535d5
--- /dev/null
+++ b/assets/js/4f68146b.edca1cc7.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[1732],{53919:(e,n,r)=>{r.r(n),r.d(n,{assets:()=>d,contentTitle:()=>l,default:()=>u,frontMatter:()=>a,metadata:()=>c,toc:()=>h});var t=r(86070),s=r(25710),i=r(27676),o=r(65480);const a={},l="Distributed execution flow paradigms",c={permalink:"/blog/2024/08/27/distributed-execution-flow-paradigms",editUrl:"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",source:"@site/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",title:"Distributed execution flow paradigms",description:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.",date:"2024-08-27T00:00:00.000Z",tags:[],readingTime:10.92,hasTruncateMarker:!1,authors:[],frontMatter:{},unlisted:!1,prevItem:{title:"Introducing gRPC Runtime",permalink:"/blog/2024/09/26/introducing-grpc-runtime"},nextItem:{title:"Python on WebAssembly: How?",permalink:"/blog/2024/08/26/python-on-webassembly"}},d={authorsImageUrls:[]},h=[{value:"1. 
Event-Driven Architecture with Message Queues",id:"1-event-driven-architecture-with-message-queues",level:3},{value:"Advantages",id:"advantages",level:4},{value:"Challenges",id:"challenges",level:4},{value:"2. The Saga Pattern",id:"2-the-saga-pattern",level:3},{value:"Advantages",id:"advantages-1",level:4},{value:"Drawbacks",id:"drawbacks",level:4},{value:"3. Stateful Orchestrators",id:"3-stateful-orchestrators",level:3},{value:"Advantages",id:"advantages-2",level:4},{value:"Challenges",id:"challenges-1",level:4},{value:"4. Durable Execution",id:"4-durable-execution",level:3},{value:"Advantages",id:"advantages-3",level:4},{value:"Challenges",id:"challenges-2",level:4}];function p(e){const n={a:"a",admonition:"admonition",code:"code",h3:"h3",h4:"h4",img:"img",li:"li",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,s.R)(),...e.components},{Details:a}=n;return a||function(e,n){throw new Error("Expected "+(n?"component":"object")+" `"+e+"` to be defined: you likely forgot to import, pass, or provide it.")}("Details",!0),(0,t.jsxs)(t.Fragment,{children:[(0,t.jsx)(n.p,{children:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? 
Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies."}),"\n",(0,t.jsx)(n.p,{children:"Having multiple components in your system introduces more failure points, which is a common phenomenon in complex systems. But one important behavior any application must ensure is that the execution flow reaches its completion. As systems grow in features and complexity, the likelihood of long-running processes increases. To ensure these processes complete as intended, several solutions have been introduced over the last few decades.\nLet's explore some of the solutions that have been proposed to achieve workflow completeness."}),"\n",(0,t.jsx)(n.h3,{id:"1-event-driven-architecture-with-message-queues",children:"1. Event-Driven Architecture with Message Queues"}),"\n",(0,t.jsx)(n.p,{children:"This architecture relies heavily on services communicating by publishing and subscribing to events using message queues. Message queues are persistent storages that ensure data is not lost during failures or service unavailability. Components in a distributed system synchronize by using events/messages through these independent services. While this approach offers service decomposability and fault tolerance, it has some shortcomings. For example, using message queues comes with the overhead of managing messages (e.g., deduplication and message ordering). It also isn\u2019t ideal for systems requiring immediate consistency across components. 
Some technologies and patterns that utilize this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://www.rabbitmq.com/",children:"RabbitMQ"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://aws.amazon.com/sqs/",children:"Amazon SQS"})}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(16676).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"5em"},children:(0,t.jsx)(n.p,{children:"Fig. Event Driven Architecture with Message Queues - RabbitMQ"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Improved Scalability"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Responsiveness"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Fault Tolerance"}),"\n",(0,t.jsx)(n.li,{children:"Simplified Complex Workflows"}),"\n",(0,t.jsx)(n.li,{children:"Real-Time Data Processing"}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Event Ordering"}),"\n",(0,t.jsx)(n.li,{children:"Data Consistency"}),"\n",(0,t.jsx)(n.li,{children:"Monitoring and Debugging"}),"\n",(0,t.jsx)(n.li,{children:"Event Deduplication"}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"You can mitigate or reduce these challenges by following best practices like Event Sourcing, Idempotent Processing, CQRS (Command Query Responsibility Segregation), and Event Versioning."}),"\n",(0,t.jsxs)(n.h3,{id:"2-the-saga-pattern",children:["2. The ",(0,t.jsx)(n.a,{href:"https://microservices.io/patterns/data/saga.html",children:"Saga Pattern"})]}),"\n",(0,t.jsx)(n.p,{children:"This design pattern aims to achieve consistency across different services in a distributed system by breaking complex transactions spanning multiple components into a series of local transactions. Each of these transactions triggers an event or message that starts the next transaction in the sequence. 
If any local transaction fails to complete, a series of compensating actions roll back the effects of preceding transactions. While the orchestration of local transactions can vary, the pattern aims to achieve consistency in a microservices-based system. Events are designed to be stored in durable storage systems or logs, providing a trail to reconstruct the system's state after a failure. While the saga pattern is an effective way to ensure consistency, it can be challenging to implement timer/timeout-based workflows and to design and implement the compensating actions for local transactions."}),"\n",(0,t.jsxs)(n.p,{children:[(0,t.jsx)(n.strong,{children:"Note"}),": In the Saga pattern, a compensating transaction must be idempotent and retryable. These principles ensure that transactions can be managed without manual intervention."]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(35936).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"10em"},children:(0,t.jsx)(n.p,{children:"Fig. The Saga Pattern for an order delivery system"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages-1",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Ensures data consistency in a distributed system without tight coupling."}),"\n",(0,t.jsx)(n.li,{children:"Provides rollback if one of the operations in the sequence fails."}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"drawbacks",children:"Drawbacks"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Might be challenging to implement initially."}),"\n",(0,t.jsx)(n.li,{children:"Hard to debug."}),"\n",(0,t.jsx)(n.li,{children:"Compensating transactions don\u2019t always work."}),"\n"]}),"\n",(0,t.jsxs)(n.h3,{id:"3-stateful-orchestrators",children:["3. 
",(0,t.jsx)(n.a,{href:"https://docs.oracle.com/en/applications/jd-edwards/cross-product/9.2/eotos/creating-a-stateful-orchestration-release-9-2-8-3.html#u30249073",children:"Stateful Orchestrators"})]}),"\n",(0,t.jsx)(n.p,{children:"Stateful orchestrators provide a solution for long-running workflows by maintaining the state of each step in a workflow. Each step in a workflow represents a task, and these tasks are represented as states inside workflows. Workflows are defined as state machines or directed acyclic graphs (DAGs). In this approach, an orchestrator handles task execution order, transitioning, handling retries, and maintaining state. In the event of a failure, the system can recover from the persisted state. Stateful orchestrators offer significant value in fault tolerance, consistency, and observability. It\u2019s one of the solutions proven effective in modern distributed computing. Some well-known services that provide this solution include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/overview.html",children:"Apache Airflow"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://azure.microsoft.com/en-us/products/logic-apps",children:"Azure Logic Apps"})}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"advantages-2",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"High Resiliency"}),": Stateful orchestrators provide high resiliency in case of outages, ensuring that workflows can continue from where they left off."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Data Persistence"}),": They allow you to keep, review, or reference data from previous events, which is useful for long-running processes."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Extended Runtime"}),": Stateful workflows can continue running for much longer than stateless workflows, 
making them suitable for complex and long-running tasks."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-1",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Additional Complexity"}),": They introduce additional complexity, requiring you to manage issues such as load balancing, CPU and memory usage, and networking."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Cost"}),": With stateful workflows, you pay for the VMs that are running in the cluster, whereas with stateless workflows, you pay only for the actual compute resources consumed."]}),"\n"]}),"\n",(0,t.jsx)(n.h3,{id:"4-durable-execution",children:"4. Durable Execution"}),"\n",(0,t.jsx)(n.p,{children:"Durable execution refers to the ability of a system to preserve the state of an application and persist execution despite failures or interruptions. Durable execution ensures that for every task, its inputs, outputs, call stack, and local variables are persisted. These constraints, or rather features, allow a system to automatically retry or continue running in the face of infrastructure or system failures, ultimately ensuring completion."}),"\n",(0,t.jsx)(n.p,{children:"Durable execution isn\u2019t a completely distinct solution from the ones listed above but rather incorporates some of their strengths while presenting a more comprehensive approach to achieving consistency, fault tolerance, data integrity, resilience for long-running processes, and observability."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/durable-exec.svg",alt:"Durable workflow engine - Temporal"}),"\n",(0,t.jsx)("div",{style:{marginLeft:"15em"},children:"Fig. 
Durable workflow engine"}),"\n",(0,t.jsx)(n.h4,{id:"advantages-3",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Reduced Manual Intervention"}),": Minimizes the need for human intervention by handling retries and failures programmatically."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Improved Observability"}),": Provides a clear audit trail and visibility into the state of workflows, which aids in debugging and monitoring."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Scalability"}),": Scales efficiently across distributed systems while maintaining workflow integrity."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-2",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Resource Intensive"}),": Persistent state storage and management can consume significant resources, especially in large-scale systems."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Latency"}),": The need to persist state and handle retries can introduce latency in the execution flow."]}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"As durable execution grows to be a fundamental driver of distributed computing, some of the solutions that use this architecture are:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://cadenceworkflow.io/",children:"Uber Cadence"})}),"\n"]}),"\n",(0,t.jsxs)(n.p,{children:["Among these, ",(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})," has grown in influence, used by companies like Snapchat, HashiCorp, Stripe, DoorDash, and Datadog. 
Its success is driven by its practical application in real-world scenarios and the expertise of its founders."]}),"\n",(0,t.jsxs)(n.p,{children:["At Metatype, we recognize the value of durable execution and are committed to making it accessible. Our ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," integrates seamlessly into our declarative API development platform, enabling users to harness the power of Temporal directly within Metatype. For those interested in exploring further, our documentation provides a detailed guide on getting started with ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"}),"."]}),"\n",(0,t.jsx)(n.p,{children:"Below is an example of how you can build a simple API to interact with an order delivery Temporal workflow within Metatype."}),"\n",(0,t.jsx)(n.admonition,{type:"note",children:(0,t.jsxs)(n.p,{children:["If you are new to Metatype or haven\u2019t set it up yet in your development environment, 
you can follow this ",(0,t.jsx)(n.a,{href:"/docs/tutorials/quick-start",children:"guideline"}),"."]})}),"\n",(0,t.jsx)(n.p,{children:"For this example, the order delivery system will have a few components/services such as Payment, Inventory, and Delivery."}),"\n",(0,t.jsx)(n.p,{children:"Your Temporal workflow definition should look similar to the one below."}),"\n",(0,t.jsxs)(o.Ay,{children:[(0,t.jsxs)(i.A,{value:"typescript",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"src/activities.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'async function sleep(time: number) {\n return new Promise((resolve) => {\n setTimeout(resolve, time);\n });\n}\n\nexport async function processPayment(orderId: string): Promise<string> {\n console.log(`Processing payment for order ${orderId}`);\n // Simulate payment processing logic\n await sleep(2);\n return "Payment processed";\n}\n\nexport async function checkInventory(orderId: string): Promise<string> {\n console.log(`Checking inventory for order ${orderId}`);\n // Simulate inventory check logic\n await sleep(2);\n return "Inventory available";\n}\n\nexport async function deliverOrder(orderId: string): Promise<string> {\n console.log(`Delivering order ${orderId}`);\n // Simulate delivery logic\n await sleep(5);\n return "Order delivered";\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Workflow definition inside ",(0,t.jsx)(n.code,{children:"src/workflows.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import {proxyActivities} from "@temporalio/workflow";\n\nexport const { processPayment, checkInventory, deliverOrder } =\n proxyActivities<{\n processPayment(orderId: string): Promise<string>;\n checkInventory(orderId: string): Promise<string>;\n deliverOrder(orderId: string): Promise<string>;\n }>({\n startToCloseTimeout: "10 seconds",\n });\n\nexport async function 
OrderWorkflow(orderId: string): Promise<string> {\n const paymentResult = await processPayment(orderId);\n const inventoryResult = await checkInventory(orderId);\n const deliveryResult = await deliverOrder(orderId);\n return `Order ${orderId} completed with results: ${paymentResult}, ${inventoryResult}, ${deliveryResult}`;\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"src/worker.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { NativeConnection, Worker } from "@temporalio/worker";\nimport * as activities from "./activities";\nimport { TASK_QUEUE_NAME } from "./shared";\n\nasync function run() {\n const connection = await NativeConnection.connect({\n address: "localhost:7233",\n });\n\n const worker = await Worker.create({\n connection,\n namespace: "default",\n taskQueue: TASK_QUEUE_NAME,\n workflowsPath: require.resolve("./workflows"),\n activities,\n });\n\n await worker.run();\n}\n\nrun().catch((err) => {\n console.error(err);\n process.exit(1);\n});\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any ",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". 
This is where Metatype comes in: through the simple APIs the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your Temporal cluster.\nBelow is the workflow communication bridge for this system expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"}),", which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { Policy, t, typegraph } from "@typegraph/sdk/index.ts";\nimport { TemporalRuntime } from "@typegraph/sdk/providers/temporal.ts";\n\ntypegraph(\n {\n name: "order_delivery",\n },\n (g: any) => {\n const pub = Policy.public();\n\n const temporal = new TemporalRuntime({\n name: "order_delivery",\n hostSecret: "HOST",\n namespaceSecret: "NAMESPACE",\n });\n\n const workflow_id = "order-delivery-1";\n\n const order_id = t.string();\n\n g.expose(\n {\n start: temporal.startWorkflow("OrderWorkflow", order_id),\n describe: workflow_id\n ? 
temporal.describeWorkflow().reduce({ workflow_id })\n : temporal.describeWorkflow(),\n },\n pub,\n );\n },\n);\n'})})]}),(0,t.jsxs)(i.A,{value:"python",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"activities.py"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from temporalio import activity\nimport time\n\n@activity.defn\nasync def process_payment(order_id: str) -> str:\n print(f"Processing payment for order {order_id}")\n # Simulate payment processing logic\n time.sleep(5)\n return "Payment processed"\n\n@activity.defn\nasync def check_inventory(order_id: str) -> str:\n print(f"Checking inventory for order {order_id}")\n # Simulate inventory check logic\n time.sleep(4)\n return "Inventory available"\n\n@activity.defn\nasync def deliver_order(order_id: str) -> str:\n print(f"Delivering order {order_id}")\n time.sleep(8)\n # Simulate delivery logic\n return "Order delivered"\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"run_worker.py"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio.client import Client\nfrom temporalio.worker import Worker\n\nfrom activities import process_payment, deliver_order, check_inventory\nfrom shared import ORDER_DELIVERY_QUEUE\nfrom workflows import OrderWorkflow\n\n\nasync def main() -> None:\n client: Client = await Client.connect("localhost:7233", namespace="default")\n worker: Worker = Worker(\n client,\n task_queue=ORDER_DELIVERY_QUEUE,\n workflows=[OrderWorkflow],\n activities=[process_payment, check_inventory, deliver_order],\n )\n await worker.run()\n\n\nif __name__ == "__main__":\n asyncio.run(main())\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any 
",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". This is where Metatype comes in: through the simple APIs the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your Temporal cluster.\nBelow is the workflow communication bridge for this system expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"}),", which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from typegraph import t, typegraph, Policy, Graph\nfrom typegraph.providers.temporal import TemporalRuntime\n\n\n@typegraph()\ndef example(g: Graph):\n public = Policy.public()\n\n temporal = TemporalRuntime(\n "example", "HOST", namespace_secret="NAMESPACE"\n )\n\n workflow_id = "order-delivery-1"\n\n order_id = t.string()\n\n g.expose(\n public,\n start=temporal.start_workflow("OrderWorkflow", order_id),\n describe=temporal.describe_workflow().reduce({"workflow_id": workflow_id})\n if workflow_id\n else temporal.describe_workflow(),\n )\n'})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add the secrets ",(0,t.jsx)(n.code,{children:"HOST"})," and ",(0,t.jsx)(n.code,{children:"NAMESPACE"})," under your typegraph name inside the ",(0,t.jsx)(n.code,{children:"metatype.yaml"})," file. 
These secrets are important to connect with your Temporal cluster and can be safely stored in the config file as shown below."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"metatype.yaml"}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-yaml",children:'typegate:\n dev:\n url: "http://localhost:7890"\n username: admin\n password: password\n secrets:\n example:\n POSTGRES: "postgresql://postgres:password@postgres:5432/db"\n HOST: "http://localhost:7233"\n NAMESPACE: "default"\n'})})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add only the last two lines as the others are auto-generated. Note that secrets are defined under the ",(0,t.jsx)(n.code,{children:"example"})," parent, which is the name of your typegraph. If the name doesn't match, you will face secret-not-found issues when deploying your typegraph."]}),"\n",(0,t.jsxs)(n.p,{children:["Before deploying the above typegraph, you need to start the Temporal server and the worker. You need to have ",(0,t.jsx)(n.a,{href:"https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli",children:"temporal"})," installed on your machine."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"Boot up Temporal"}),(0,t.jsx)(n.p,{children:"Start the Temporal server."}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"temporal server start-dev\n"})}),(0,t.jsx)(n.p,{children:"Start the worker."}),(0,t.jsxs)(o.Ay,{children:[(0,t.jsx)(i.A,{value:"typescript",children:(0,t.jsx)(n.p,{children:(0,t.jsx)(n.code,{children:"npx ts-node src/worker.ts"})})}),(0,t.jsx)(i.A,{value:"python",children:(0,t.jsx)(n.code,{children:"python run_worker.py"})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["After booting the Temporal server, run the command below to get a locally spinning ",(0,t.jsx)(n.a,{href:"/docs/reference/typegate",children:"typegate"})," instance with your 
typegraph deployed."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"meta dev\n"})}),"\n",(0,t.jsxs)(n.p,{children:["After completing the above steps, you can access the web GraphQL client of the typegate at ",(0,t.jsx)(n.a,{href:"http://localhost:7890/example",children:(0,t.jsx)(n.code,{children:"http://localhost:7890/example"})}),". Run this query inside the client to start your workflow."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-graphql",children:'mutation {\n start(\n workflow_id: "order-delivery-3"\n task_queue: "order-delivery-queue"\n args: ["order12"]\n )\n}\n'})}),"\n",(0,t.jsxs)(n.p,{children:["After a successful run, you will get the following result, which includes the ",(0,t.jsx)(n.code,{children:"run_id"})," of the workflow that has just been started."]}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/start-workflow-result.png",alt:"Query result"}),"\n",(0,t.jsx)(n.p,{children:"You can also check the Temporal Web UI to monitor your workflows, and you should see a result similar to this one."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/temporal-web-ui.png",alt:"Workflows dashboard"}),"\n",(0,t.jsxs)(n.p,{children:["You can explore the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," for more info."]}),"\n",(0,t.jsx)(n.p,{children:"This wraps up the blog. Thanks for reading until the end :)"})]})}function u(e={}){const{wrapper:n}={...(0,s.R)(),...e.components};return n?(0,t.jsx)(n,{...e,children:(0,t.jsx)(p,{...e})}):p(e)}},65480:(e,n,r)=>{r.d(n,{Ay:()=>o,gc:()=>a});r(30758);var t=r(3733),s=r(56315),i=r(86070);function o(e){let{children:n}=e;const[r,o]=(0,t.e)();return(0,i.jsx)(s.mS,{choices:{typescript:"Typescript SDK",python:"Python SDK"},choice:r,onChange:o,children:n})}function a(e){let{children:n}=e;const[r]=(0,t.e)();return(0,i.jsx)(s.q9,{choices:{typescript:"Typescript SDK",python:"Python 
SDK"},choice:r,children:n})}},16676:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/eda.drawio-9d730aef7e9f00ffed737626d602be5c.svg"},35936:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/saga.drawio-6f492c8332ead1021dde63fa7daf0efd.svg"}}]);
\ No newline at end of file
diff --git a/assets/js/6b5a7be1.510e9cd3.js b/assets/js/6b5a7be1.510e9cd3.js
new file mode 100644
index 0000000000..9530f8ccde
--- /dev/null
+++ b/assets/js/6b5a7be1.510e9cd3.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[2920],{65977:(e,t,n)=>{n.r(t),n.d(t,{assets:()=>l,contentTitle:()=>o,default:()=>p,frontMatter:()=>s,metadata:()=>a,toc:()=>c});var i=n(86070),r=n(25710);const s={},o="Introducing gRPC Runtime",a={permalink:"/blog/2024/09/26/introducing-grpc-runtime",editUrl:"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-09-26-introducing-grpc-runtime/index.mdx",source:"@site/blog/2024-09-26-introducing-grpc-runtime/index.mdx",title:"Introducing gRPC Runtime",description:"We're excited to announce the new gRPC Runtime feature in Metatype, further enhancing our platform's ability to create versatile and powerful backends through typegraphs.",date:"2024-09-26T00:00:00.000Z",tags:[],readingTime:4.4,hasTruncateMarker:!1,authors:[],frontMatter:{},unlisted:!1,nextItem:{title:"Distributed execution flow paradigms",permalink:"/blog/2024/08/27/distributed-execution-flow-paradigms"}},l={authorsImageUrls:[]},c=[{value:"What is gRPC?",id:"what-is-grpc",level:2},{value:"Why gRPC Matters for Metatype",id:"why-grpc-matters-for-metatype",level:2},{value:"Diagram: gRPC and Metatype Integration",id:"diagram-grpc-and-metatype-integration",level:2},{value:"Introducing gRPC Runtime in Metatype",id:"introducing-grpc-runtime-in-metatype",level:2},{value:"Key Technical Details",id:"key-technical-details",level:2},{value:"Architecture",id:"architecture",level:3},{value:"Implementation",id:"implementation",level:3},{value:"Benefits for Developers",id:"benefits-for-developers",level:2},{value:"Getting Started",id:"getting-started",level:2},{value:"Conclusion",id:"conclusion",level:2}];function h(e){const t={a:"a",code:"code",em:"em",h2:"h2",h3:"h3",img:"img",li:"li",ol:"ol",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,r.R)(),...e.components};return(0,i.jsxs)(i.Fragment,{children:[(0,i.jsx)(t.p,{children:"We're excited to announce the new gRPC Runtime feature in Metatype, further enhancing our 
platform's ability to create versatile and powerful backends through typegraphs."}),"\n",(0,i.jsx)(t.h2,{id:"what-is-grpc",children:"What is gRPC?"}),"\n",(0,i.jsxs)(t.p,{children:["gRPC, or ",(0,i.jsx)(t.strong,{children:"g"}),"oogle ",(0,i.jsx)(t.strong,{children:"R"}),"emote ",(0,i.jsx)(t.strong,{children:"P"}),"rocedure ",(0,i.jsx)(t.strong,{children:"C"}),"all, is a high-performance, open-source communication framework initially developed by Google. It enables ",(0,i.jsx)(t.strong,{children:"efficient and fast communication between microservices"})," in a distributed system, making it ideal for modern backend architectures."]}),"\n",(0,i.jsxs)(t.p,{children:["Unlike traditional HTTP APIs that use JSON, gRPC relies on ",(0,i.jsx)(t.strong,{children:"Protocol Buffers"})," (protobufs) for serializing data, which are more compact and faster to process. This approach allows gRPC to support high-throughput, low-latency communication, which is crucial for applications where speed and efficiency matter, such as in real-time data processing or large-scale distributed systems."]}),"\n",(0,i.jsx)(t.p,{children:"Key benefits of gRPC include:"}),"\n",(0,i.jsxs)(t.ul,{children:["\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Cross-language support"}),": gRPC supports multiple programming languages, allowing services written in different languages to communicate seamlessly."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Strong type safety"}),": Protocol Buffers ensure type-safe communication, catching errors early and improving reliability."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Bidirectional streaming"}),": gRPC allows for client and server streaming, enabling continuous data transfer in both directions, ideal for applications like real-time analytics."]}),"\n"]}),"\n",(0,i.jsx)(t.p,{children:"In short, gRPC is well-suited for high-performance, scalable backend systems where speed and type safety are 
essential."}),"\n",(0,i.jsx)(t.h2,{id:"why-grpc-matters-for-metatype",children:"Why gRPC Matters for Metatype"}),"\n",(0,i.jsxs)(t.p,{children:["Metatype is a platform that enables developers to create ",(0,i.jsx)(t.strong,{children:"typegraphs"}),"\u2014strongly-typed, composable backend structures that can support multiple protocols and runtime environments. With the introduction of the gRPC Runtime, Metatype allows developers to incorporate gRPC services into these typegraphs, further enhancing the platform\u2019s versatility."]}),"\n",(0,i.jsx)(t.p,{children:"By integrating gRPC, Metatype empowers developers to:"}),"\n",(0,i.jsxs)(t.ul,{children:["\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Expose gRPC services via GraphQL or HTTP endpoints"}),", making them accessible to clients in a way that best suits their needs."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Compose gRPC services with other backend components"}),", such as databases or other APIs, to create powerful and cohesive backend systems."]}),"\n"]}),"\n",(0,i.jsx)(t.h2,{id:"diagram-grpc-and-metatype-integration",children:"Diagram: gRPC and Metatype Integration"}),"\n",(0,i.jsx)("center",{children:(0,i.jsx)(t.p,{children:(0,i.jsx)(t.img,{alt:"gRPC and Metatype Integration Diagram",src:n(15602).A+"",width:"161",height:"681"})})}),"\n",(0,i.jsx)(t.p,{children:(0,i.jsx)(t.em,{children:"Metatype\u2019s gRPC Runtime allows developers to integrate gRPC services into their typegraphs, enabling seamless interaction with gRPC services in the backend."})}),"\n",(0,i.jsx)(t.h2,{id:"introducing-grpc-runtime-in-metatype",children:"Introducing gRPC Runtime in Metatype"}),"\n",(0,i.jsx)(t.p,{children:"The new gRPC Runtime is the latest addition to Metatype's suite of runtimes, joining existing options like the HTTP runtime. 
This expansion allows you to incorporate gRPC services into your typegraphs, further enhancing the versatility of your Metatype-powered backends."}),"\n",(0,i.jsx)(t.h2,{id:"key-technical-details",children:"Key Technical Details"}),"\n",(0,i.jsx)(t.h3,{id:"architecture",children:"Architecture"}),"\n",(0,i.jsx)(t.p,{children:"The gRPC Runtime integrates seamlessly with Metatype's existing architecture. It acts as a bridge between your typegraph and external gRPC services, allowing you to incorporate gRPC calls alongside other runtime operations in your backend logic."}),"\n",(0,i.jsxs)(t.ol,{children:["\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"GrpcRuntime Class"}),": The main interface for defining gRPC interactions within your typegraph."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"proto_file"}),": Path to the .proto file that defines the gRPC service."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"endpoint"}),": The gRPC server address in the format ",(0,i.jsx)(t.code,{children:"tcp://<host>:<port>"}),"."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"call method"}),": Creates a typegraph function for gRPC method calls."]}),"\n"]}),"\n",(0,i.jsx)(t.h3,{id:"implementation",children:"Implementation"}),"\n",(0,i.jsx)(t.p,{children:"Here's how the gRPC Runtime fits into a Metatype typegraph:"}),"\n",(0,i.jsx)(t.pre,{children:(0,i.jsx)(t.code,{className:"language-python",children:'from typegraph import Graph, Policy, typegraph\nfrom typegraph.graph.params import Cors\nfrom typegraph.runtimes.grpc import GrpcRuntime\n\n@typegraph(\n cors=Cors(allow_origin=["https://metatype.dev", "http://localhost:3000"]),\n)\ndef create_grpc_typegraph(g: Graph):\n # The GrpcRuntime acts as a bridge between your typegraph and external gRPC services\n grpc_runtime = GrpcRuntime(\n # proto_file: Path to the .proto file that defines the gRPC service\n proto_file="proto/helloworld.proto",\n # endpoint: The gRPC server address 
in the format tcp://<host>:<port>\n endpoint="tcp://localhost:4770"\n )\n \n # Expose the gRPC service within your typegraph\n # This allows you to incorporate gRPC calls alongside other runtime operations\n g.expose(\n Policy.public(),\n # call method: Creates a typegraph function for gRPC method calls\n # It uses the full path to the gRPC method: /package_name.service_name/method_name\n greet=grpc_runtime.call("/helloworld.Greeter/SayHello"),\n )\n\n# The typegraph can now be exposed via GraphQL or HTTP, \n# allowing clients to interact with the gRPC service through Metatype\'s unified interface\n'})}),"\n",(0,i.jsx)(t.p,{children:"This implementation demonstrates how the gRPC Runtime integrates with your typegraph, allowing you to:"}),"\n",(0,i.jsxs)(t.ol,{children:["\n",(0,i.jsx)(t.li,{children:"Define gRPC service connections using the GrpcRuntime class"}),"\n",(0,i.jsx)(t.li,{children:"Expose gRPC methods as part of your typegraph"}),"\n",(0,i.jsx)(t.li,{children:"Combine gRPC functionality with other Metatype features and runtimes"}),"\n"]}),"\n",(0,i.jsx)(t.p,{children:"By structuring your gRPC interactions this way, you can seamlessly incorporate gRPC services into your larger Metatype-powered backend, alongside other data sources and business logic."}),"\n",(0,i.jsx)(t.h2,{id:"benefits-for-developers",children:"Benefits for Developers"}),"\n",(0,i.jsxs)(t.ol,{children:["\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Unified Backend Structure"}),": Incorporate gRPC services alongside other protocols and data sources in a single, coherent typegraph."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Type Safety"}),": Leverage Metatype's strong typing system in conjunction with gRPC's protocol buffers for end-to-end type safety."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Flexible Exposure"}),": Easily expose your gRPC services via GraphQL or HTTP endpoints, allowing clients to interact with them using their preferred 
protocol."]}),"\n",(0,i.jsxs)(t.li,{children:[(0,i.jsx)(t.strong,{children:"Composability"}),": Combine gRPC calls with other runtime operations, database queries, or business logic within your typegraph."]}),"\n"]}),"\n",(0,i.jsx)(t.h2,{id:"getting-started",children:"Getting Started"}),"\n",(0,i.jsx)(t.p,{children:"To start using the gRPC Runtime in your Metatype project:"}),"\n",(0,i.jsxs)(t.ol,{children:["\n",(0,i.jsx)(t.li,{children:"Ensure you have the latest version of Metatype installed."}),"\n",(0,i.jsx)(t.li,{children:"Prepare your .proto files for the gRPC services you want to integrate."}),"\n",(0,i.jsx)(t.li,{children:"Set up your typegraph as shown in the example above, incorporating the GrpcRuntime."}),"\n",(0,i.jsx)(t.li,{children:"Configure your Metatype backend to expose the typegraph via GraphQL or HTTP as needed."}),"\n"]}),"\n",(0,i.jsx)(t.h2,{id:"conclusion",children:"Conclusion"}),"\n",(0,i.jsx)(t.p,{children:"The addition of the gRPC Runtime to Metatype further solidifies its position as a comprehensive platform for building robust, type-safe backends. By allowing seamless integration of gRPC services alongside other protocols and data sources, Metatype empowers developers to create versatile and powerful backend systems with ease."}),"\n",(0,i.jsxs)(t.p,{children:["For more detailed documentation, code examples, and best practices, check out our ",(0,i.jsx)(t.a,{href:"https://metatype.dev/docs",children:"official Metatype docs"}),"."]})]})}function p(e={}){const{wrapper:t}={...(0,r.R)(),...e.components};return t?(0,i.jsx)(t,{...e,children:(0,i.jsx)(h,{...e})}):h(e)}},15602:(e,t,n)=>{n.d(t,{A:()=>i});const i=n.p+"assets/images/GrpcMetatype.drawio-e6e626affae448f5b44a48ab82805ee9.png"}}]);
\ No newline at end of file
diff --git a/assets/js/6e544dd5.67ab3519.js b/assets/js/6e544dd5.67ab3519.js
deleted file mode 100644
index c8ce4a4c5c..0000000000
--- a/assets/js/6e544dd5.67ab3519.js
+++ /dev/null
@@ -1 +0,0 @@
-"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[3126],{2845:(e,n,r)=>{r.r(n),r.d(n,{assets:()=>d,contentTitle:()=>l,default:()=>u,frontMatter:()=>a,metadata:()=>c,toc:()=>h});var t=r(86070),s=r(25710),i=r(27676),o=r(65480);const a={},l="Distributed execution flow paradigms",c={permalink:"/blog/2024/08/27/distributed-execution-flow-paradigms",editUrl:"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",source:"@site/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",title:"Distributed execution flow paradigms",description:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.",date:"2024-08-27T00:00:00.000Z",tags:[],readingTime:10.92,hasTruncateMarker:!1,authors:[],frontMatter:{},unlisted:!1,nextItem:{title:"Python on WebAssembly: How?",permalink:"/blog/2024/08/26/python-on-webassembly"}},d={authorsImageUrls:[]},h=[{value:"1. 
Event-Driven Architecture with Message Queues",id:"1-event-driven-architecture-with-message-queues",level:3},{value:"Advantages",id:"advantages",level:4},{value:"Challenges",id:"challenges",level:4},{value:"2. The Saga Pattern",id:"2-the-saga-pattern",level:3},{value:"Advantages",id:"advantages-1",level:4},{value:"Drawbacks",id:"drawbacks",level:4},{value:"3. Stateful Orchestrators",id:"3-stateful-orchestrators",level:3},{value:"Advantages",id:"advantages-2",level:4},{value:"Challenges",id:"challenges-1",level:4},{value:"4. Durable Execution",id:"4-durable-execution",level:3},{value:"Advantages",id:"advantages-3",level:4},{value:"Challenges",id:"challenges-2",level:4}];function p(e){const n={a:"a",admonition:"admonition",code:"code",h3:"h3",h4:"h4",img:"img",li:"li",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,s.R)(),...e.components},{Details:a}=n;return a||function(e,n){throw new Error("Expected "+(n?"component":"object")+" `"+e+"` to be defined: you likely forgot to import, pass, or provide it.")}("Details",!0),(0,t.jsxs)(t.Fragment,{children:[(0,t.jsx)(n.p,{children:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? 
Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies."}),"\n",(0,t.jsx)(n.p,{children:"Having multiple components in your system introduces more failure points, which is a common phenomenon in complex systems. But one important behavior any application must ensure is that the execution flow reaches its completion. As systems grow in features and complexity, the likelihood of long-running processes increases. To ensure these processes complete as intended, several solutions have been introduced over the last few decades.\nLet's explore some of the solutions that have been proposed to achieve workflow completeness."}),"\n",(0,t.jsx)(n.h3,{id:"1-event-driven-architecture-with-message-queues",children:"1. Event-Driven Architecture with Message Queues"}),"\n",(0,t.jsx)(n.p,{children:"This architecture relies heavily on services communicating by publishing and subscribing to events using message queues. Message queues are persistent storages that ensure data is not lost during failures or service unavailability. Components in a distributed system synchronize by using events/messages through these independent services. While this approach offers service decomposability and fault tolerance, it has some shortcomings. For example, using message queues comes with the overhead of managing messages (e.g., deduplication and message ordering). It also isn\u2019t ideal for systems requiring immediate consistency across components. 
Some technologies and patterns that utilize this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://www.rabbitmq.com/",children:"RabbitMQ"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://aws.amazon.com/sqs/",children:"Amazon SQS"})}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(16676).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"5em"},children:(0,t.jsx)(n.p,{children:"Fig. Event Driven Architecture with Message Queues - RabbitMQ"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Improved Scalability"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Responsiveness"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Fault Tolerance"}),"\n",(0,t.jsx)(n.li,{children:"Simplified Complex Workflows"}),"\n",(0,t.jsx)(n.li,{children:"Real-Time Data Processing"}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Event Ordering"}),"\n",(0,t.jsx)(n.li,{children:"Data Consistency"}),"\n",(0,t.jsx)(n.li,{children:"Monitoring and Debugging"}),"\n",(0,t.jsx)(n.li,{children:"Event Deduplication"}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"You can mitigate or reduce these challenges by following best practices like Event Sourcing, Idempotent Processing, CQRS (Command Query Responsibility Segregation), and Event Versioning."}),"\n",(0,t.jsxs)(n.h3,{id:"2-the-saga-pattern",children:["2. The ",(0,t.jsx)(n.a,{href:"https://microservices.io/patterns/data/saga.html",children:"Saga Pattern"})]}),"\n",(0,t.jsx)(n.p,{children:"This design pattern aims to achieve consistency across different services in a distributed system by breaking complex transactions spanning multiple components into a series of local transactions. Each of these transactions triggers an event or message that starts the next transaction in the sequence. 
If any local transaction fails to complete, a series of compensating actions roll back the effects of preceding transactions. While the orchestration of local transactions can vary, the pattern aims to achieve consistency in a microservices-based system. Events are designed to be stored in durable storage systems or logs, providing a trail to reconstruct the system state after a failure. While the saga pattern is an effective way to ensure consistency, it can be challenging to implement timer/timeout-based workflows and to design and implement the compensating actions for local transactions."}),"\n",(0,t.jsxs)(n.p,{children:[(0,t.jsx)(n.strong,{children:"Note"}),": In the Saga pattern, a compensating transaction must be idempotent and retryable. These principles ensure that transactions can be managed without manual intervention."]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(35936).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"10em"},children:(0,t.jsx)(n.p,{children:"Fig. The Saga Pattern for an order delivery system"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages-1",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Ensures data consistency in a distributed system without tight coupling."}),"\n",(0,t.jsx)(n.li,{children:"Provides rollback if one of the operations in the sequence fails."}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"drawbacks",children:"Drawbacks"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Might be challenging to implement initially."}),"\n",(0,t.jsx)(n.li,{children:"Hard to debug."}),"\n",(0,t.jsx)(n.li,{children:"Compensating transactions don\u2019t always work."}),"\n"]}),"\n",(0,t.jsxs)(n.h3,{id:"3-stateful-orchestrators",children:["3. 
",(0,t.jsx)(n.a,{href:"https://docs.oracle.com/en/applications/jd-edwards/cross-product/9.2/eotos/creating-a-stateful-orchestration-release-9-2-8-3.html#u30249073",children:"Stateful Orchestrators"})]}),"\n",(0,t.jsx)(n.p,{children:"Stateful orchestrators provide a solution for long-running workflows by maintaining the state of each step in a workflow. Each step in a workflow represents a task, and these tasks are represented as states inside workflows. Workflows are defined as state machines or directed acyclic graphs (DAGs). In this approach, an orchestrator handles task execution order, transitioning, handling retries, and maintaining state. In the event of a failure, the system can recover from the persisted state. Stateful orchestrators offer significant value in fault tolerance, consistency, and observability. It\u2019s one of the solutions proven effective in modern distributed computing. Some well-known services that provide this solution include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/overview.html",children:"Apache Airflow"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://azure.microsoft.com/en-us/products/logic-apps",children:"Azure Logic Apps"})}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"advantages-2",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"High Resiliency"}),": Stateful orchestrators provide high resiliency in case of outages, ensuring that workflows can continue from where they left off."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Data Persistence"}),": They allow you to keep, review, or reference data from previous events, which is useful for long-running processes."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Extended Runtime"}),": Stateful workflows can continue running for much longer than stateless workflows, 
making them suitable for complex and long-running tasks."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-1",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Additional Complexity"}),": They introduce additional complexity, requiring you to manage issues such as load balancing, CPU and memory usage, and networking."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Cost"}),": With stateful workflows, you pay for the VMs that are running in the cluster, whereas with stateless workflows, you pay only for the actual compute resources consumed."]}),"\n"]}),"\n",(0,t.jsx)(n.h3,{id:"4-durable-execution",children:"4. Durable Execution"}),"\n",(0,t.jsx)(n.p,{children:"Durable execution refers to the ability of a system to preserve the state of an application and persist execution despite failures or interruptions. Durable execution ensures that for every task, its inputs, outputs, call stack, and local variables are persisted. These constraints, or rather features, allow a system to automatically retry or continue running in the face of infrastructure or system failures, ultimately ensuring completion."}),"\n",(0,t.jsx)(n.p,{children:"Durable execution isn\u2019t a completely distinct solution from the ones listed above but rather incorporates some of their strengths while presenting a more comprehensive approach to achieving consistency, fault tolerance, data integrity, resilience for long-running processes, and observability."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/durable-exec.svg",alt:"Durable workflow engine - Temporal"}),"\n",(0,t.jsx)("div",{style:{marginLeft:"15em"},children:"Fig. 
Durable workflow engine"}),"\n",(0,t.jsx)(n.h4,{id:"advantages-3",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Reduced Manual Intervention"}),": Minimizes the need for human intervention by handling retries and failures programmatically."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Improved Observability"}),": Provides a clear audit trail and visibility into the state of workflows, which aids in debugging and monitoring."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Scalability"}),": Scales efficiently across distributed systems while maintaining workflow integrity."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-2",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Resource Intensive"}),": Persistent state storage and management can consume significant resources, especially in large-scale systems."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Latency"}),": The need to persist state and handle retries can introduce latency in the execution flow."]}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"As durable execution grows to be a fundamental driver of distributed computing, some of the solutions that use this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://cadenceworkflow.io/",children:"Uber Cadence"})}),"\n"]}),"\n",(0,t.jsxs)(n.p,{children:["Among these, ",(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})," has grown in influence, used by companies like Snapchat, HashiCorp, Stripe, DoorDash, and Datadog. 
Its success is driven by its practical application in real-world scenarios and the expertise of its founders."]}),"\n",(0,t.jsxs)(n.p,{children:["At Metatype, we recognize the value of durable execution and are committed to making it accessible. Our ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," integrates seamlessly into our declarative API development platform, enabling users to harness the power of Temporal directly within Metatype. For those interested in exploring further, our documentation provides a detailed guide on getting started with ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"}),"."]}),"\n",(0,t.jsx)(n.p,{children:"Below is an example of how you can build a simple API to interact with an order delivery temporal workflow within Metatype."}),"\n",(0,t.jsx)(n.admonition,{type:"note",children:(0,t.jsxs)(n.p,{children:["If you are new to Metatype or haven\u2019t set it up yet in your development environment, 
you can follow this ",(0,t.jsx)(n.a,{href:"/docs/tutorials/quick-start",children:"guideline"}),"."]})}),"\n",(0,t.jsx)(n.p,{children:"For this example, the order delivery system will have a few components/services, such as Payment, Inventory, and Delivery."}),"\n",(0,t.jsx)(n.p,{children:"Your Temporal workflow definition should look similar to the one below."}),"\n",(0,t.jsxs)(o.Ay,{children:[(0,t.jsxs)(i.A,{value:"typescript",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"src/activities.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'async function sleep(time: number) {\n return new Promise((resolve) => {\n setTimeout(resolve, time);\n });\n}\n\nexport async function processPayment(orderId: string): Promise<string> {\n console.log(`Processing payment for order ${orderId}`);\n // Simulate payment processing logic\n await sleep(2000);\n return "Payment processed";\n}\n\nexport async function checkInventory(orderId: string): Promise<string> {\n console.log(`Checking inventory for order ${orderId}`);\n // Simulate inventory check logic\n await sleep(2000);\n return "Inventory available";\n}\n\nexport async function deliverOrder(orderId: string): Promise<string> {\n console.log(`Delivering order ${orderId}`);\n // Simulate delivery logic\n await sleep(5000);\n return "Order delivered";\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Workflow definition inside ",(0,t.jsx)(n.code,{children:"src/workflows.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { proxyActivities } from "@temporalio/workflow";\n\nexport const { processPayment, checkInventory, deliverOrder } =\n proxyActivities<{\n processPayment(orderId: string): Promise<string>;\n checkInventory(orderId: string): Promise<string>;\n deliverOrder(orderId: string): Promise<string>;\n }>({\n startToCloseTimeout: "10 seconds",\n });\n\nexport async function 
OrderWorkflow(orderId: string): Promise<string> {\n const paymentResult = await processPayment(orderId);\n const inventoryResult = await checkInventory(orderId);\n const deliveryResult = await deliverOrder(orderId);\n return `Order ${orderId} completed with results: ${paymentResult}, ${inventoryResult}, ${deliveryResult}`;\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"src/worker.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { NativeConnection, Worker } from "@temporalio/worker";\nimport * as activities from "./activities";\nimport { TASK_QUEUE_NAME } from "./shared";\n\nasync function run() {\n const connection = await NativeConnection.connect({\n address: "localhost:7233",\n });\n\n const worker = await Worker.create({\n connection,\n namespace: "default",\n taskQueue: TASK_QUEUE_NAME,\n workflowsPath: require.resolve("./workflows"),\n activities,\n });\n\n await worker.run();\n}\n\nrun().catch((err) => {\n console.error(err);\n process.exit(1);\n});\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any ",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". 
This is where Metatype comes in: through the simple APIs the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your Temporal cluster.\nBelow is the workflow communication bridge for this system, expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"})," that includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { Policy, t, typegraph } from "@typegraph/sdk/index.ts";\nimport { TemporalRuntime } from "@typegraph/sdk/providers/temporal.ts";\n\ntypegraph(\n {\n name: "order_delivery",\n },\n (g: any) => {\n const pub = Policy.public();\n\n const temporal = new TemporalRuntime({\n name: "order_delivery",\n hostSecret: "HOST",\n namespaceSecret: "NAMESPACE",\n });\n\n const workflow_id = "order-delivery-1";\n\n const order_id = t.string();\n\n g.expose(\n {\n start: temporal.startWorkflow("OrderWorkflow", order_id),\n describe: workflow_id\n ? 
temporal.describeWorkflow().reduce({ workflow_id })\n : temporal.describeWorkflow(),\n },\n pub,\n );\n },\n);\n'})})]}),(0,t.jsxs)(i.A,{value:"python",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"activities.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio import activity\n\n@activity.defn\nasync def process_payment(order_id: str) -> str:\n print(f"Processing payment for order {order_id}")\n # Simulate payment processing logic\n await asyncio.sleep(5)\n return "Payment processed"\n\n@activity.defn\nasync def check_inventory(order_id: str) -> str:\n print(f"Checking inventory for order {order_id}")\n # Simulate inventory check logic\n await asyncio.sleep(4)\n return "Inventory available"\n\n@activity.defn\nasync def deliver_order(order_id: str) -> str:\n print(f"Delivering order {order_id}")\n # Simulate delivery logic\n await asyncio.sleep(8)\n return "Order delivered"\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"run_worker.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio.client import Client\nfrom temporalio.worker import Worker\n\nfrom activities import process_payment, deliver_order, check_inventory\nfrom shared import ORDER_DELIVERY_QUEUE\nfrom workflows import OrderWorkflow\n\n\nasync def main() -> None:\n client: Client = await Client.connect("localhost:7233", namespace="default")\n worker: Worker = Worker(\n client,\n task_queue=ORDER_DELIVERY_QUEUE,\n workflows=[OrderWorkflow],\n activities=[process_payment, check_inventory, deliver_order],\n )\n await worker.run()\n\n\nif __name__ == "__main__":\n asyncio.run(main())\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any 
",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". This is where Metatype comes in: through the simple APIs the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your Temporal cluster.\nBelow is the workflow communication bridge for this system, expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"})," that includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from typegraph import t, typegraph, Policy, Graph\nfrom typegraph.providers.temporal import TemporalRuntime\n\n\n@typegraph()\ndef example(g: Graph):\n public = Policy.public()\n\n temporal = TemporalRuntime(\n "example", "HOST", namespace_secret="NAMESPACE"\n )\n\n workflow_id = "order-delivery-1"\n\n order_id = t.string()\n\n g.expose(\n public,\n start=temporal.start_workflow("OrderWorkflow", order_id),\n describe=temporal.describe_workflow().reduce({"workflow_id": workflow_id})\n if workflow_id\n else temporal.describe_workflow(),\n )\n'})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add the secrets ",(0,t.jsx)(n.code,{children:"HOST"})," and ",(0,t.jsx)(n.code,{children:"NAMESPACE"})," under your typegraph name inside the ",(0,t.jsx)(n.code,{children:"metatype.yaml"})," file. 
These secrets are required to connect to your Temporal cluster and can be safely stored in the config file as shown below."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"metatype.yaml"}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-yaml",children:'typegate:\n dev:\n url: "http://localhost:7890"\n username: admin\n password: password\n secrets:\n example:\n POSTGRES: "postgresql://postgres:password@postgres:5432/db"\n HOST: "http://localhost:7233"\n NAMESPACE: "default"\n'})})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add only the last two lines as the others are auto-generated. Note that secrets are defined under the ",(0,t.jsx)(n.code,{children:"example"})," parent, which is the name of your typegraph. If the name doesn't match, you will run into secret-not-found errors when deploying your typegraph."]}),"\n",(0,t.jsxs)(n.p,{children:["Before deploying the above typegraph, you need to start the Temporal server and the worker. Make sure you have ",(0,t.jsx)(n.a,{href:"https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli",children:"temporal"})," installed on your machine."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"Boot up Temporal"}),(0,t.jsx)(n.p,{children:"Start the Temporal server."}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"temporal server start-dev\n"})}),(0,t.jsx)(n.p,{children:"Start the worker."}),(0,t.jsxs)(o.Ay,{children:[(0,t.jsx)(i.A,{value:"typescript",children:(0,t.jsx)(n.p,{children:(0,t.jsx)(n.code,{children:"npx ts-node src/worker.ts"})})}),(0,t.jsx)(i.A,{value:"python",children:(0,t.jsx)(n.code,{children:"python run_worker.py"})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["After booting the Temporal server, run the command below to get a locally spinning ",(0,t.jsx)(n.a,{href:"/docs/reference/typegate",children:"typegate"})," instance with your 
typegraph deployed."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"meta dev\n"})}),"\n",(0,t.jsxs)(n.p,{children:["After completing the above steps, you can access the web GraphQL client of the typegate at ",(0,t.jsx)(n.a,{href:"http://localhost:7890/example",children:(0,t.jsx)(n.code,{children:"http://localhost:7890/example"})}),". Run this query inside the client to start your workflow."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-graphql",children:'mutation {\n start(\n workflow_id: "order-delivery-3"\n task_queue: "order-delivery-queue"\n args: ["order12"]\n )\n}\n'})}),"\n",(0,t.jsxs)(n.p,{children:["After a successful run, you will get the following result, which includes the ",(0,t.jsx)(n.code,{children:"run_id"})," of the workflow that has just been started."]}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/start-workflow-result.png",alt:"Query result"}),"\n",(0,t.jsx)(n.p,{children:"You can also check the Temporal Web UI to monitor your workflows, and you should see a result similar to this one."}),"\n",(0,t.jsxs)(n.p,{children:["You can explore the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," for more info."]}),"\n",(0,t.jsx)(n.p,{children:"This wraps up the blog. Thanks for reading until the end :)"})]})}function u(e={}){const{wrapper:n}={...(0,s.R)(),...e.components};return n?(0,t.jsx)(n,{...e,children:(0,t.jsx)(p,{...e})}):p(e)}},65480:(e,n,r)=>{r.d(n,{Ay:()=>o,gc:()=>a});r(30758);var t=r(3733),s=r(56315),i=r(86070);function o(e){let{children:n}=e;const[r,o]=(0,t.e)();return(0,i.jsx)(s.mS,{choices:{typescript:"Typescript SDK",python:"Python SDK"},choice:r,onChange:o,children:n})}function a(e){let{children:n}=e;const[r]=(0,t.e)();return(0,i.jsx)(s.q9,{choices:{typescript:"Typescript SDK",python:"Python 
SDK"},choice:r,children:n})}},16676:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/eda.drawio-9d730aef7e9f00ffed737626d602be5c.svg"},35936:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/saga.drawio-6f492c8332ead1021dde63fa7daf0efd.svg"}}]);
\ No newline at end of file
diff --git a/assets/js/6e544dd5.aa17255f.js b/assets/js/6e544dd5.aa17255f.js
new file mode 100644
index 0000000000..b31cf30d30
--- /dev/null
+++ b/assets/js/6e544dd5.aa17255f.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[3126],{2845:(e,n,r)=>{r.r(n),r.d(n,{assets:()=>d,contentTitle:()=>l,default:()=>u,frontMatter:()=>a,metadata:()=>c,toc:()=>h});var t=r(86070),s=r(25710),i=r(27676),o=r(65480);const a={},l="Distributed execution flow paradigms",c={permalink:"/blog/2024/08/27/distributed-execution-flow-paradigms",editUrl:"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",source:"@site/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx",title:"Distributed execution flow paradigms",description:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.",date:"2024-08-27T00:00:00.000Z",tags:[],readingTime:10.92,hasTruncateMarker:!1,authors:[],frontMatter:{},unlisted:!1,prevItem:{title:"Introducing gRPC Runtime",permalink:"/blog/2024/09/26/introducing-grpc-runtime"},nextItem:{title:"Python on WebAssembly: How?",permalink:"/blog/2024/08/26/python-on-webassembly"}},d={authorsImageUrls:[]},h=[{value:"1. 
Event-Driven Architecture with Message Queues",id:"1-event-driven-architecture-with-message-queues",level:3},{value:"Advantages",id:"advantages",level:4},{value:"Challenges",id:"challenges",level:4},{value:"2. The Saga Pattern",id:"2-the-saga-pattern",level:3},{value:"Advantages",id:"advantages-1",level:4},{value:"Drawbacks",id:"drawbacks",level:4},{value:"3. Stateful Orchestrators",id:"3-stateful-orchestrators",level:3},{value:"Advantages",id:"advantages-2",level:4},{value:"Challenges",id:"challenges-1",level:4},{value:"4. Durable Execution",id:"4-durable-execution",level:3},{value:"Advantages",id:"advantages-3",level:4},{value:"Challenges",id:"challenges-2",level:4}];function p(e){const n={a:"a",admonition:"admonition",code:"code",h3:"h3",h4:"h4",img:"img",li:"li",p:"p",pre:"pre",strong:"strong",ul:"ul",...(0,s.R)(),...e.components},{Details:a}=n;return a||function(e,n){throw new Error("Expected "+(n?"component":"object")+" `"+e+"` to be defined: you likely forgot to import, pass, or provide it.")}("Details",!0),(0,t.jsxs)(t.Fragment,{children:[(0,t.jsx)(n.p,{children:"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? 
Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies."}),"\n",(0,t.jsx)(n.p,{children:"Having multiple components in your system introduces more failure points, which is a common phenomenon in complex systems. But one important behavior any application must ensure is that the execution flow reaches its completion. As systems grow in features and complexity, the likelihood of long-running processes increases. To ensure these processes complete as intended, several solutions have been introduced over the last few decades.\nLet's explore some of the solutions that have been proposed to achieve workflow completeness."}),"\n",(0,t.jsx)(n.h3,{id:"1-event-driven-architecture-with-message-queues",children:"1. Event-Driven Architecture with Message Queues"}),"\n",(0,t.jsx)(n.p,{children:"This architecture relies heavily on services communicating by publishing and subscribing to events using message queues. Message queues are persistent stores that ensure data is not lost during failures or service unavailability. Components in a distributed system synchronize by using events/messages through these independent services. While this approach offers service decomposability and fault tolerance, it has some shortcomings. For example, using message queues comes with the overhead of managing messages (e.g., deduplication and message ordering). It also isn\u2019t ideal for systems requiring immediate consistency across components. 
Some technologies and patterns that utilize this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://www.rabbitmq.com/",children:"RabbitMQ"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://aws.amazon.com/sqs/",children:"Amazon SQS"})}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(16676).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"5em"},children:(0,t.jsx)(n.p,{children:"Fig. Event Driven Architecture with Message Queues - RabbitMQ"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Improved Scalability"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Responsiveness"}),"\n",(0,t.jsx)(n.li,{children:"Enhanced Fault Tolerance"}),"\n",(0,t.jsx)(n.li,{children:"Simplified Complex Workflows"}),"\n",(0,t.jsx)(n.li,{children:"Real-Time Data Processing"}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Event Ordering"}),"\n",(0,t.jsx)(n.li,{children:"Data Consistency"}),"\n",(0,t.jsx)(n.li,{children:"Monitoring and Debugging"}),"\n",(0,t.jsx)(n.li,{children:"Event Deduplication"}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"You can mitigate or reduce these challenges by following best practices like Event Sourcing, Idempotent Processing, CQRS (Command Query Responsibility Segregation), and Event Versioning."}),"\n",(0,t.jsxs)(n.h3,{id:"2-the-saga-pattern",children:["2. The ",(0,t.jsx)(n.a,{href:"https://microservices.io/patterns/data/saga.html",children:"Saga Pattern"})]}),"\n",(0,t.jsx)(n.p,{children:"This design pattern aims to achieve consistency across different services in a distributed system by breaking complex transactions spanning multiple components into a series of local transactions. Each of these transactions triggers an event or message that starts the next transaction in the sequence. 
If any local transaction fails to complete, a series of compensating actions roll back the effects of preceding transactions. While the orchestration of local transactions can vary, the pattern aims to achieve consistency in a microservices-based system. Events are designed to be stored in durable storage systems or logs, providing a trail to reconstruct the system state after a failure. While the saga pattern is an effective way to ensure consistency, it can be challenging to implement timer/timeout-based workflows and to design and implement the compensating actions for local transactions."}),"\n",(0,t.jsxs)(n.p,{children:[(0,t.jsx)(n.strong,{children:"Note"}),": In the Saga pattern, a compensating transaction must be idempotent and retryable. These principles ensure that transactions can be managed without manual intervention."]}),"\n",(0,t.jsx)(n.p,{children:(0,t.jsx)(n.img,{src:r(35936).A+""})}),"\n",(0,t.jsx)("div",{style:{marginLeft:"10em"},children:(0,t.jsx)(n.p,{children:"Fig. The Saga Pattern for Order delivery system"})}),"\n",(0,t.jsx)(n.h4,{id:"advantages-1",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Ensures data consistency in a distributed system without tight coupling."}),"\n",(0,t.jsx)(n.li,{children:"Provides rollback if one of the operations in the sequence fails."}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"drawbacks",children:"Drawbacks"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:"Might be challenging to implement initially."}),"\n",(0,t.jsx)(n.li,{children:"Hard to debug."}),"\n",(0,t.jsx)(n.li,{children:"Compensating transactions don\u2019t always work."}),"\n"]}),"\n",(0,t.jsxs)(n.h3,{id:"3-stateful-orchestrators",children:["3. 
",(0,t.jsx)(n.a,{href:"https://docs.oracle.com/en/applications/jd-edwards/cross-product/9.2/eotos/creating-a-stateful-orchestration-release-9-2-8-3.html#u30249073",children:"Stateful Orchestrators"})]}),"\n",(0,t.jsx)(n.p,{children:"Stateful orchestrators provide a solution for long-running workflows by maintaining the state of each step in a workflow. Each step in a workflow represents a task, and these tasks are represented as states inside workflows. Workflows are defined as state machines or directed acyclic graphs (DAGs). In this approach, an orchestrator handles task execution order, transitioning, handling retries, and maintaining state. In the event of a failure, the system can recover from the persisted state. Stateful orchestrators offer significant value in fault tolerance, consistency, and observability. It\u2019s one of the solutions proven effective in modern distributed computing. Some well-known services that provide this solution include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/overview.html",children:"Apache Airflow"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://azure.microsoft.com/en-us/products/logic-apps",children:"Azure Logic Apps"})}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"advantages-2",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"High Resiliency"}),": Stateful orchestrators provide high resiliency in case of outages, ensuring that workflows can continue from where they left off."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Data Persistence"}),": They allow you to keep, review, or reference data from previous events, which is useful for long-running processes."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Extended Runtime"}),": Stateful workflows can continue running for much longer than stateless workflows, 
making them suitable for complex and long-running tasks."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-1",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Additional Complexity"}),": They introduce additional complexity, requiring you to manage issues such as load balancing, CPU and memory usage, and networking."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Cost"}),": With stateful workflows, you pay for the VMs that are running in the cluster, whereas with stateless workflows, you pay only for the actual compute resources consumed."]}),"\n"]}),"\n",(0,t.jsx)(n.h3,{id:"4-durable-execution",children:"4. Durable Execution"}),"\n",(0,t.jsx)(n.p,{children:"Durable execution refers to the ability of a system to preserve the state of an application and persist execution despite failures or interruptions. Durable execution ensures that for every task, its inputs, outputs, call stack, and local variables are persisted. These constraints, or rather features, allow a system to automatically retry or continue running in the face of infrastructure or system failures, ultimately ensuring completion."}),"\n",(0,t.jsx)(n.p,{children:"Durable execution isn\u2019t a completely distinct solution from the ones listed above but rather incorporates some of their strengths while presenting a more comprehensive approach to achieving consistency, fault tolerance, data integrity, resilience for long-running processes, and observability."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/durable-exec.svg",alt:"Durable workflow engine - Temporal"}),"\n",(0,t.jsx)("div",{style:{marginLeft:"15em"},children:"Fig. 
Durable workflow engine"}),"\n",(0,t.jsx)(n.h4,{id:"advantages-3",children:"Advantages"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Reduced Manual Intervention"}),": Minimizes the need for human intervention by handling retries and failures programmatically."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Improved Observability"}),": Provides a clear audit trail and visibility into the state of workflows, which aids in debugging and monitoring."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Scalability"}),": Scales efficiently across distributed systems while maintaining workflow integrity."]}),"\n"]}),"\n",(0,t.jsx)(n.h4,{id:"challenges-2",children:"Challenges"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Resource Intensive"}),": Persistent state storage and management can consume significant resources, especially in large-scale systems."]}),"\n",(0,t.jsxs)(n.li,{children:[(0,t.jsx)(n.strong,{children:"Latency"}),": The need to persist state and handle retries can introduce latency in the execution flow."]}),"\n"]}),"\n",(0,t.jsx)(n.p,{children:"As durable execution grows to be a fundamental driver of distributed computing, some of the solutions that use this architecture include:"}),"\n",(0,t.jsxs)(n.ul,{children:["\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})}),"\n",(0,t.jsx)(n.li,{children:(0,t.jsx)(n.a,{href:"https://cadenceworkflow.io/",children:"Uber Cadence"})}),"\n"]}),"\n",(0,t.jsxs)(n.p,{children:["Among these, ",(0,t.jsx)(n.a,{href:"https://temporal.io/",children:"Temporal"})," has grown in influence, used by companies like SnapChat, HashiCorp, Stripe, DoorDash, and DataDog. 
Its success is driven by its practical application in real-world scenarios and the expertise of its founders."]}),"\n",(0,t.jsxs)(n.p,{children:["At Metatype, we recognize the value of durable execution and are committed to making it accessible. Our ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," integrates seamlessly into our declarative API development platform, enabling users to harness the power of Temporal directly within Metatype. For those interested in exploring further, our documentation provides a detailed guide on getting started with ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"}),"."]}),"\n",(0,t.jsx)(n.p,{children:"Below is an example of how you can build a simple API to interact with an order delivery Temporal workflow within Metatype."}),"\n",(0,t.jsx)(n.admonition,{type:"note",children:(0,t.jsxs)(n.p,{children:["New to Metatype, or haven\u2019t set it up yet in your development environment? 
You can follow this ",(0,t.jsx)(n.a,{href:"/docs/tutorials/quick-start",children:"guide"}),"."]})}),"\n",(0,t.jsx)(n.p,{children:"For this example, the order delivery system will have a few components/services, such as Payment, Inventory, and Delivery."}),"\n",(0,t.jsx)(n.p,{children:"Your Temporal workflow definition should look similar to the one below."}),"\n",(0,t.jsxs)(o.Ay,{children:[(0,t.jsxs)(i.A,{value:"typescript",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"src/activities.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'async function sleep(time: number) {\n return new Promise((resolve) => {\n setTimeout(resolve, time);\n });\n}\n\nexport async function processPayment(orderId: string): Promise<string> {\n console.log(`Processing payment for order ${orderId}`);\n // Simulate payment processing logic\n await sleep(2);\n return "Payment processed";\n}\n\nexport async function checkInventory(orderId: string): Promise<string> {\n console.log(`Checking inventory for order ${orderId}`);\n // Simulate inventory check logic\n await sleep(2);\n return "Inventory available";\n}\n\nexport async function deliverOrder(orderId: string): Promise<string> {\n console.log(`Delivering order ${orderId}`);\n // Simulate delivery logic\n await sleep(5);\n return "Order delivered";\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Workflow definition inside ",(0,t.jsx)(n.code,{children:"src/workflows.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import {proxyActivities} from "@temporalio/workflow";\n\nexport const { processPayment, checkInventory, deliverOrder } =\n proxyActivities<{\n processPayment(orderId: string): Promise<string>;\n checkInventory(orderId: string): Promise<string>;\n deliverOrder(orderId: string): Promise<string>;\n }>({\n startToCloseTimeout: "10 seconds",\n });\n\nexport async function 
OrderWorkflow(orderId: string): Promise<string> {\n const paymentResult = await processPayment(orderId);\n const inventoryResult = await checkInventory(orderId);\n const deliveryResult = await deliverOrder(orderId);\n return `Order ${orderId} completed with results: ${paymentResult}, ${inventoryResult}, ${deliveryResult}`;\n}\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"src/worker.ts"}),":"]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { NativeConnection, Worker } from "@temporalio/worker";\nimport * as activities from "./activities";\nimport { TASK_QUEUE_NAME } from "./shared";\n\nasync function run() {\n const connection = await NativeConnection.connect({\n address: "localhost:7233",\n });\n\n const worker = await Worker.create({\n connection,\n namespace: "default",\n taskQueue: TASK_QUEUE_NAME,\n workflowsPath: require.resolve("./workflows"),\n activities,\n });\n\n await worker.run();\n}\n\nrun().catch((err) => {\n console.error(err);\n process.exit(1);\n});\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any ",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". 
Here is where Metatype comes in: through the simple APIs the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your Temporal cluster.\nBelow is the workflow communication bridge for this system, expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"}),", which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-typescript",children:'import { Policy, t, typegraph } from "@typegraph/sdk/index.ts";\nimport { TemporalRuntime } from "@typegraph/sdk/providers/temporal.ts";\n\ntypegraph(\n {\n name: "order_delivery",\n },\n (g: any) => {\n const pub = Policy.public();\n\n const temporal = new TemporalRuntime({\n name: "order_delivery",\n hostSecret: "HOST",\n namespaceSecret: "NAMESPACE",\n });\n\n const workflow_id = "order-delivery-1";\n\n const order_id = t.string();\n\n g.expose(\n {\n start: temporal.startWorkflow("OrderWorkflow", order_id),\n describe: workflow_id\n ? 
temporal.describeWorkflow().reduce({ workflow_id })\n : temporal.describeWorkflow(),\n },\n pub,\n );\n },\n);\n'})})]}),(0,t.jsxs)(i.A,{value:"python",children:[(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Activities definition inside ",(0,t.jsx)(n.code,{children:"activities.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio import activity\n\n@activity.defn\nasync def process_payment(order_id: str) -> str:\n print(f"Processing payment for order {order_id}")\n # Simulate payment processing logic\n await asyncio.sleep(5)\n return "Payment processed"\n\n@activity.defn\nasync def check_inventory(order_id: str) -> str:\n print(f"Checking inventory for order {order_id}")\n # Simulate inventory check logic\n await asyncio.sleep(4)\n return "Inventory available"\n\n@activity.defn\nasync def deliver_order(order_id: str) -> str:\n print(f"Delivering order {order_id}")\n # Simulate delivery logic\n await asyncio.sleep(8)\n return "Order delivered"\n'})})]}),(0,t.jsxs)(a,{children:[(0,t.jsxs)("summary",{children:["Worker definition inside ",(0,t.jsx)(n.code,{children:"run_worker.py"}),"."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'import asyncio\n\nfrom temporalio.client import Client\nfrom temporalio.worker import Worker\n\nfrom activities import process_payment, deliver_order, check_inventory\nfrom shared import ORDER_DELIVERY_QUEUE\nfrom workflows import OrderWorkflow\n\n\nasync def main() -> None:\n client: Client = await Client.connect("localhost:7233", namespace="default")\n worker: Worker = Worker(\n client,\n task_queue=ORDER_DELIVERY_QUEUE,\n workflows=[OrderWorkflow],\n activities=[process_payment, check_inventory, deliver_order],\n )\n await worker.run()\n\n\nif __name__ == "__main__":\n asyncio.run(main())\n'})})]}),(0,t.jsxs)(n.p,{children:["After you have set up the above components, you need a client to start any 
",(0,t.jsx)(n.code,{children:"OrderWorkflow"}),". Here is where metatype comes in, through the simple APIs ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," exposes, you can communicate with your temporal cluster.\nDown below is the workflow communication bridge for this system expressed within a ",(0,t.jsx)(n.a,{href:"/docs/reference/typegraph",children:"typegraph"})," which includes endpoints to start a new workflow and describe an existing one."]}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-python",children:'from typegraph import t, typegraph, Policy, Graph\nfrom typegraph.providers.temporal import TemporalRuntime\n\n\n@typegraph()\ndef example(g: Graph):\n public = Policy.public()\n\n temporal = TemporalRuntime(\n "example", "HOST", namespace_secret="NAMESPACE"\n )\n\n workflow_id = "order-delivery-1"\n\n order_id = t.string()\n\n g.expose(\n public,\n start=temporal.start_workflow("OrderWorkflow", order_id),\n describe=temporal.describe_workflow().reduce({"workflow_id": workflow_id})\n if workflow_id\n else temporal.describe_workflow(),\n )\n'})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add the secrets ",(0,t.jsx)(n.code,{children:"HOST"})," and ",(0,t.jsx)(n.code,{children:"NAMESPACE"})," under your typegraph name inside the ",(0,t.jsx)(n.code,{children:"metatype.yaml"})," file. 
These secrets are important to connect with your Temporal cluster and can be safely stored in the config file as shown below."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"metatype.yaml"}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-yaml",children:'typegate:\n dev:\n url: "http://localhost:7890"\n username: admin\n password: password\n secrets:\n example:\n POSTGRES: "postgresql://postgres:password@postgres:5432/db"\n HOST: "http://localhost:7233"\n NAMESPACE: "default"\n'})})]}),"\n",(0,t.jsxs)(n.p,{children:["You need to add only the last two lines, as the others are auto-generated. Note that secrets are defined under the ",(0,t.jsx)(n.code,{children:"example"})," parent, which is the name of your typegraph. If the name doesn't match, you will face \u201csecret not found\u201d issues when deploying your typegraph."]}),"\n",(0,t.jsxs)(n.p,{children:["Before deploying the above typegraph, you need to start the Temporal server and the worker. You need to have ",(0,t.jsx)(n.a,{href:"https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli",children:"temporal"})," installed on your machine."]}),"\n",(0,t.jsxs)(a,{children:[(0,t.jsx)("summary",{children:"Boot up Temporal"}),(0,t.jsx)(n.p,{children:"Start the Temporal server."}),(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"temporal server start-dev\n"})}),(0,t.jsx)(n.p,{children:"Start the worker."}),(0,t.jsxs)(o.Ay,{children:[(0,t.jsx)(i.A,{value:"typescript",children:(0,t.jsx)(n.p,{children:(0,t.jsx)(n.code,{children:"npx ts-node src/worker.ts"})})}),(0,t.jsx)(i.A,{value:"python",children:(0,t.jsx)(n.code,{children:"python run_worker.py"})})]})]}),"\n",(0,t.jsxs)(n.p,{children:["After booting the Temporal server, run the command below to get a locally running ",(0,t.jsx)(n.a,{href:"/docs/reference/typegate",children:"typegate"})," instance with your 
typegraph deployed."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-bash",children:"meta dev\n"})}),"\n",(0,t.jsxs)(n.p,{children:["After completing the above steps, you can access the web GraphQL client of the typegate at ",(0,t.jsx)(n.a,{href:"http://localhost:7890/example",children:(0,t.jsx)(n.code,{children:"http://localhost:7890/example"})}),". Run this query inside the client to start your workflow."]}),"\n",(0,t.jsx)(n.pre,{children:(0,t.jsx)(n.code,{className:"language-graphql",children:'mutation {\n start(\n workflow_id: "order-delivery-3"\n task_queue: "order-delivery-queue"\n args: ["order12"]\n )\n}\n'})}),"\n",(0,t.jsxs)(n.p,{children:["After a successful run, you will get the following result, which includes the ",(0,t.jsx)(n.code,{children:"run_id"})," of the workflow that has just been started."]}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/start-workflow-result.png",alt:"Query result"}),"\n",(0,t.jsx)(n.p,{children:"You can also check the Temporal web UI to monitor your workflows, and you should see a result similar to this one."}),"\n",(0,t.jsx)("img",{src:"/images/blog/execution-flow-paradigms/temporal-web-ui.png",alt:"Workflows dashboard"}),"\n",(0,t.jsxs)(n.p,{children:["You can explore the ",(0,t.jsx)(n.a,{href:"/docs/reference/runtimes/temporal",children:"Temporal Runtime"})," for more info."]}),"\n",(0,t.jsx)(n.p,{children:"This wraps up the blog. Thanks for reading until the end :)"})]})}function u(e={}){const{wrapper:n}={...(0,s.R)(),...e.components};return n?(0,t.jsx)(n,{...e,children:(0,t.jsx)(p,{...e})}):p(e)}},65480:(e,n,r)=>{r.d(n,{Ay:()=>o,gc:()=>a});r(30758);var t=r(3733),s=r(56315),i=r(86070);function o(e){let{children:n}=e;const[r,o]=(0,t.e)();return(0,i.jsx)(s.mS,{choices:{typescript:"Typescript SDK",python:"Python SDK"},choice:r,onChange:o,children:n})}function a(e){let{children:n}=e;const[r]=(0,t.e)();return(0,i.jsx)(s.q9,{choices:{typescript:"Typescript SDK",python:"Python 
SDK"},choice:r,children:n})}},16676:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/eda.drawio-9d730aef7e9f00ffed737626d602be5c.svg"},35936:(e,n,r)=>{r.d(n,{A:()=>t});const t=r.p+"assets/images/saga.drawio-6f492c8332ead1021dde63fa7daf0efd.svg"}}]);
\ No newline at end of file
diff --git a/assets/js/95b96bb9.0ec46023.js b/assets/js/95b96bb9.0ec46023.js
deleted file mode 100644
index 62872cda25..0000000000
--- a/assets/js/95b96bb9.0ec46023.js
+++ /dev/null
@@ -1 +0,0 @@
-"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[6405],{7057:e=>{e.exports=JSON.parse('{"title":"Recent posts","items":[{"title":"Distributed execution flow paradigms","permalink":"/blog/2024/08/27/distributed-execution-flow-paradigms","unlisted":false},{"title":"Python on WebAssembly: How?","permalink":"/blog/2024/08/26/python-on-webassembly","unlisted":false},{"title":"Programmatic deployment (v0.4.x)","permalink":"/blog/2024/05/09/programmatic-deployment","unlisted":false},{"title":"The Node/Deno SDK is now available","permalink":"/blog/2023/11/27/node-compatibility","unlisted":false},{"title":"Programmable glue for developers","permalink":"/blog/2023/06/18/programmable-glue","unlisted":false}]}')}}]);
\ No newline at end of file
diff --git a/assets/js/95b96bb9.3dbec1e6.js b/assets/js/95b96bb9.3dbec1e6.js
new file mode 100644
index 0000000000..25b597c777
--- /dev/null
+++ b/assets/js/95b96bb9.3dbec1e6.js
@@ -0,0 +1 @@
+"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[6405],{7057:e=>{e.exports=JSON.parse('{"title":"Recent posts","items":[{"title":"Introducing gRPC Runtime","permalink":"/blog/2024/09/26/introducing-grpc-runtime","unlisted":false},{"title":"Distributed execution flow paradigms","permalink":"/blog/2024/08/27/distributed-execution-flow-paradigms","unlisted":false},{"title":"Python on WebAssembly: How?","permalink":"/blog/2024/08/26/python-on-webassembly","unlisted":false},{"title":"Programmatic deployment (v0.4.x)","permalink":"/blog/2024/05/09/programmatic-deployment","unlisted":false},{"title":"The Node/Deno SDK is now available","permalink":"/blog/2023/11/27/node-compatibility","unlisted":false}]}')}}]);
\ No newline at end of file
diff --git a/assets/js/97787cbd.9066a603.js b/assets/js/97787cbd.85bdc752.js
similarity index 75%
rename from assets/js/97787cbd.9066a603.js
rename to assets/js/97787cbd.85bdc752.js
index 03f21dd957..b5c7f255e0 100644
--- a/assets/js/97787cbd.9066a603.js
+++ b/assets/js/97787cbd.85bdc752.js
@@ -1 +1 @@
-"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[1922],{63961:e=>{e.exports=JSON.parse('{"metadata":{"permalink":"/blog","page":1,"postsPerPage":10,"totalPages":1,"totalCount":6,"blogDescription":"Blog","blogTitle":"Blog"}}')}}]);
\ No newline at end of file
+"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[1922],{63961:e=>{e.exports=JSON.parse('{"metadata":{"permalink":"/blog","page":1,"postsPerPage":10,"totalPages":1,"totalCount":7,"blogDescription":"Blog","blogTitle":"Blog"}}')}}]);
\ No newline at end of file
diff --git a/assets/js/b3219b4c.29471a66.js b/assets/js/b3219b4c.29471a66.js
deleted file mode 100644
index 0e78cd06d0..0000000000
--- a/assets/js/b3219b4c.29471a66.js
+++ /dev/null
@@ -1 +0,0 @@
-"use strict";(self.webpackChunkwebsite=self.webpackChunkwebsite||[]).push([[3099],{83890:e=>{e.exports=JSON.parse('{"archive":{"blogPosts":[{"id":"/2024/08/27/distributed-execution-flow-paradigms","metadata":{"permalink":"/blog/2024/08/27/distributed-execution-flow-paradigms","editUrl":"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx","source":"@site/blog/2024-08-27-distributed-execution-flow-paradigms/index.mdx","title":"Distributed execution flow paradigms","description":"In this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? 
Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.","date":"2024-08-27T00:00:00.000Z","tags":[],"readingTime":10.92,"hasTruncateMarker":false,"authors":[],"frontMatter":{},"unlisted":false,"nextItem":{"title":"Python on WebAssembly: How?","permalink":"/blog/2024/08/26/python-on-webassembly"}},"content":"import TabItem from \\"@theme/TabItem\\";\\nimport SDKTabs from \\"@site/src/components/SDKTabs\\";\\n\\n\\nIn this age of cloud development and microservices architecture, problems start to arise with the increased workloads that run in the system. Imagine an e-commerce platform where a customer places an order for a product during a high-demand sale event. The order triggers a series of interconnected processes: payment processing, inventory checks, packaging, shipping, and final delivery. Each of these processes might be handled by different microservices, potentially running on different servers or even in different data centers. What happens if the payment service goes down right after the payment is authorized but before the inventory is updated? Or if the packaging service fails just after the inventory is deducted but before the item is packed? Without a robust mechanism to ensure that each step in the workflow completes successfully and that failures are properly handled, you could end up with unhappy customers, lost orders, and inventory discrepancies.\\n\\nHaving multiple components in your system introduces more failure points, which is a common phenomenon in complex systems. But one important behavior any application must ensure is that the execution flow reaches its completion. As systems grow in features and complexity, the likelihood of long-running processes increases. 
To ensure these processes complete as intended, several solutions have been introduced over the last few decades.\\nLet\'s explore some of the solutions that have been proposed to achieve workflow completeness.\\n\\n### 1. Event-Driven Architecture with Message Queues\\n\\nThis architecture relies heavily on services communicating by publishing and subscribing to events using message queues. Message queues are persistent storages that ensure data is not lost during failures or service unavailability. Components in a distributed system synchronize by using events/messages through these independent services. While this approach offers service decomposability and fault tolerance, it has some shortcomings. For example, using message queues comes with the overhead of managing messages (e.g., deduplication and message ordering). It also isn\u2019t ideal for systems requiring immediate consistency across components. Some technologies and patterns that utilize this architecture include:\\n\\n- [RabbitMQ](https://www.rabbitmq.com/)\\n- [Amazon SQS](https://aws.amazon.com/sqs/)\\n\\n![](eda.drawio.svg)\\n\\n
\\n Fig. Event Driven Architecture with Message Queues - RabbitMQ\\n
\\n\\n#### Advantages\\n\\n- Improved Scalability\\n- Enhanced Responsiveness\\n- Enhanced Fault Tolerance\\n- Simplified Complex Workflows\\n- Real-Time Data Processing\\n\\n#### Challenges\\n\\n- Event Ordering\\n- Data Consistency\\n- Monitoring and Debugging\\n- Event Deduplication\\n\\nYou can mitigate or reduce these challenges by following best practices like Event Sourcing, Idempotent Processing, CQRS (Command Query Responsibility Segregation), and Event Versioning.\\n\\n### 2. The [Saga Pattern](https://microservices.io/patterns/data/saga.html)\\n\\nThis design pattern aims to achieve consistency across different services in a distributed system by breaking complex transactions spanning multiple components into a series of local transactions. Each of these transactions triggers an event or message that starts the next transaction in the sequence. If any local transaction fails to complete, a series of compensating actions roll back the effects of preceding transactions. While the orchestration of local transactions can vary, the pattern aims to achieve consistency in a microservices-based system. Events are designed to be stored in durable storage systems or logs, providing a trail to reconstruct the system to a state after a failure. While the saga pattern is an effective way to ensure consistency, it can be challenging to implement timer/timeout-based workflows and to design and implement the compensating actions for local transactions.\\n\\n**Note**: In the Saga pattern, a compensating transaction must be idempotent and retryable. These principles ensure that transactions can be managed without manual intervention.\\n\\n![](saga.drawio.svg)\\n\\n
\\n Fig. The Saga Pattern for an order delivery system\\n
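A minimal orchestration-style saga can be sketched as follows (hypothetical step names mirroring the order example, not any framework's actual API): each local transaction is paired with a compensating action, and when a step fails, the orchestrator runs the compensations for the already-completed steps in reverse order.

```python
# Minimal saga-orchestrator sketch with hypothetical step names.
def run_saga(steps, log):
    completed = []
    for name, action, compensate in steps:
        try:
            action()
            log.append(f"{name}: done")
            completed.append((name, compensate))
        except Exception:
            log.append(f"{name}: failed")
            # Roll back the effects of preceding transactions, newest first.
            for done_name, comp in reversed(completed):
                comp()
                log.append(f"{done_name}: compensated")
            return False
    return True

log = []
stock = {"item": 1}

def reserve_item():
    stock["item"] -= 1

def release_item():
    stock["item"] += 1

def ship_order():
    raise RuntimeError("packaging service down")  # simulated failure

ok = run_saga(
    [
        ("payment", lambda: None, lambda: log.append("refund issued")),
        ("inventory", reserve_item, release_item),
        ("shipping", ship_order, lambda: None),
    ],
    log,
)
print(ok, stock["item"])  # the inventory deduction has been rolled back
```

Note that `release_item` and the refund compensation are written to be safe to retry, matching the idempotency requirement stated above.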
\\n\\n#### Advantages\\n\\n- Ensures data consistency in a distributed system without tight coupling.\\n- Provides rollback if one of the operations in the sequence fails.\\n\\n#### Drawbacks\\n\\n- Might be challenging to implement initially.\\n- Hard to debug.\\n- Compensating transactions don\u2019t always work.\\n\\n### 3. [Stateful Orchestrators](https://docs.oracle.com/en/applications/jd-edwards/cross-product/9.2/eotos/creating-a-stateful-orchestration-release-9-2-8-3.html#u30249073)\\n\\nStateful orchestrators provide a solution for long-running workflows by maintaining the state of each step in a workflow. Each step in a workflow represents a task, and these tasks are represented as states inside workflows. Workflows are defined as state machines or directed acyclic graphs (DAGs). In this approach, an orchestrator handles task execution order, transitions, retries, and state persistence. In the event of a failure, the system can recover from the persisted state. Stateful orchestrators offer significant value in fault tolerance, consistency, and observability. They are among the solutions proven effective in modern distributed computing. 
Some well-known services that provide this solution include:\\n\\n- [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/overview.html)\\n- [Azure Logic Apps](https://azure.microsoft.com/en-us/products/logic-apps)\\n\\n#### Advantages\\n\\n- **High Resiliency**: Stateful orchestrators provide high resiliency in case of outages, ensuring that workflows can continue from where they left off.\\n- **Data Persistence**: They allow you to keep, review, or reference data from previous events, which is useful for long-running processes.\\n- **Extended Runtime**: Stateful workflows can continue running for much longer than stateless workflows, making them suitable for complex and long-running tasks.\\n\\n#### Challenges\\n\\n- **Additional Complexity**: They introduce additional complexity, requiring you to manage issues such as load balancing, CPU and memory usage, and networking.\\n- **Cost**: With stateful workflows, you pay for the VMs that are running in the cluster, whereas with stateless workflows, you pay only for the actual compute resources consumed.\\n\\n### 4. Durable Execution\\n\\nDurable execution refers to the ability of a system to preserve the state of an application and persist execution despite failures or interruptions. Durable execution ensures that for every task, its inputs, outputs, call stack, and local variables are persisted. These constraints, or rather features, allow a system to automatically retry or continue running in the face of infrastructure or system failures, ultimately ensuring completion.\\n\\nDurable execution isn\u2019t a completely distinct solution from the ones listed above but rather incorporates some of their strengths while presenting a more comprehensive approach to achieving consistency, fault tolerance, data integrity, resilience for long-running processes, and observability.\\n\\n\\n
Fig. Durable workflow engine
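The core mechanic, persisting each completed step so that a crashed run can resume where it left off, can be sketched as follows (an illustrative toy with hypothetical names, not how Temporal or Cadence are actually implemented): completed steps write their results to a journal, and on re-execution, journaled steps are replayed from storage instead of running again.

```python
# Durable-execution sketch (illustrative only): every completed step persists
# its result; on retry, persisted steps are replayed instead of re-executed,
# so the workflow resumes from where it left off.
journal = {}      # stands in for durable storage
executions = []   # tracks which steps actually ran (vs. were replayed)

def durable_step(name, fn):
    if name in journal:       # replay: reuse the persisted result
        return journal[name]
    result = fn()             # first run: execute, then persist
    executions.append(name)
    journal[name] = result
    return result

def order_workflow(fail_at=None):
    a = durable_step("payment", lambda: "Payment processed")
    b = durable_step("inventory", lambda: "Inventory available")
    if fail_at == "delivery":
        raise RuntimeError("worker crashed before delivery")
    c = durable_step("delivery", lambda: "Order delivered")
    return [a, b, c]

try:
    order_workflow(fail_at="delivery")  # first attempt crashes mid-way
except RuntimeError:
    pass

result = order_workflow()  # retry: payment and inventory are replayed
print(result, executions)  # each step executed exactly once across both runs
```

Real engines add much more (inputs, call stacks, timers, queues), but the journal-and-replay idea is the reason a retried workflow does not double-charge the customer.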
\\n\\n#### Advantages\\n\\n- **Reduced Manual Intervention**: Minimizes the need for human intervention by handling retries and failures programmatically.\\n- **Improved Observability**: Provides a clear audit trail and visibility into the state of workflows, which aids in debugging and monitoring.\\n- **Scalability**: Scales efficiently across distributed systems while maintaining workflow integrity.\\n\\n#### Challenges\\n\\n- **Resource Intensive**: Persistent state storage and management can consume significant resources, especially in large-scale systems.\\n- **Latency**: The need to persist state and handle retries can introduce latency in the execution flow.\\n\\nAs durable execution grows to be a fundamental driver of distributed computing, some of the solutions that use this architecture include:\\n\\n- [Temporal](https://temporal.io/)\\n- [Uber Cadence](https://cadenceworkflow.io/)\\n\\nAmong these, [Temporal](https://temporal.io/) has grown in influence, used by companies like Snapchat, HashiCorp, Stripe, DoorDash, and DataDog. Its success is driven by its practical application in real-world scenarios and the expertise of its founders.\\n\\nAt Metatype, we recognize the value of durable execution and are committed to making it accessible. Our [Temporal Runtime](/docs/reference/runtimes/temporal) integrates seamlessly into our declarative API development platform, enabling users to harness the power of Temporal directly within Metatype. For those interested in exploring further, our documentation provides a detailed guide on getting started with [Temporal Runtime](/docs/reference/runtimes/temporal).\\n\\nBelow is an example of how you can build a simple API to interact with an order-delivery Temporal workflow within Metatype.\\n\\n:::note\\nIf you are new to Metatype or haven\u2019t set it up in your development environment yet, don\u2019t worry. 
You can follow this [guideline](/docs/tutorials/quick-start).\\n:::\\n\\nFor this example, the order delivery system will have a few components/services such as Payment, Inventory and Delivery.\\n\\nYour Temporal workflow definition should look similar to the one below.\\n\\n\\n\\n \\n\\n\\n\\nActivities definition inside `src/activities.ts`:\\n\\n```typescript\\nasync function sleep(time: number) {\\n return new Promise((resolve) => {\\n setTimeout(resolve, time);\\n });\\n}\\n\\nexport async function processPayment(orderId: string): Promise<string> {\\n console.log(`Processing payment for order ${orderId}`);\\n // Simulate payment processing logic\\n await sleep(2);\\n return \\"Payment processed\\";\\n}\\n\\nexport async function checkInventory(orderId: string): Promise<string> {\\n console.log(`Checking inventory for order ${orderId}`);\\n // Simulate inventory check logic\\n await sleep(2);\\n return \\"Inventory available\\";\\n}\\n\\nexport async function deliverOrder(orderId: string): Promise<string> {\\n console.log(`Delivering order ${orderId}`);\\n // Simulate delivery logic\\n await sleep(5);\\n return \\"Order delivered\\";\\n}\\n```\\n\\n\\n\\n\\n\\nWorkflow definition inside `src/workflows.ts`:\\n\\n```typescript\\nimport { proxyActivities } from \\"@temporalio/workflow\\";\\n\\nexport const { processPayment, checkInventory, deliverOrder } =\\n proxyActivities<{\\n processPayment(orderId: string): Promise<string>;\\n checkInventory(orderId: string): Promise<string>;\\n deliverOrder(orderId: string): Promise<string>;\\n }>({\\n startToCloseTimeout: \\"10 seconds\\",\\n });\\n\\nexport async function OrderWorkflow(orderId: string): Promise<string> {\\n const paymentResult = await processPayment(orderId);\\n const inventoryResult = await checkInventory(orderId);\\n const deliveryResult = await deliverOrder(orderId);\\n return `Order ${orderId} completed with results: ${paymentResult}, ${inventoryResult}, ${deliveryResult}`;\\n}\\n```\\n\\n\\n\\nWorker definition inside `src/worker.ts`:\\n\\n```typescript\\nimport { 
NativeConnection, Worker } from \\"@temporalio/worker\\";\\nimport * as activities from \\"./activities\\";\\nimport { TASK_QUEUE_NAME } from \\"./shared\\";\\n\\nasync function run() {\\n const connection = await NativeConnection.connect({\\n address: \\"localhost:7233\\",\\n });\\n\\n const worker = await Worker.create({\\n connection,\\n namespace: \\"default\\",\\n taskQueue: TASK_QUEUE_NAME,\\n workflowsPath: require.resolve(\\"./workflows\\"),\\n activities,\\n });\\n\\n await worker.run();\\n}\\n\\nrun().catch((err) => {\\n console.error(err);\\n process.exit(1);\\n});\\n```\\n\\n\\n\\nAfter you have set up the above components, you need a client to start an `OrderWorkflow`. This is where Metatype comes in: through the simple APIs the [Temporal Runtime](/docs/reference/runtimes/temporal) exposes, you can communicate with your Temporal cluster.\\nBelow is the workflow communication bridge for this system, expressed within a [typegraph](/docs/reference/typegraph), which includes endpoints to start a new workflow and describe an existing one.\\n\\n```typescript\\nimport { Policy, t, typegraph } from \\"@typegraph/sdk/index.ts\\";\\nimport { TemporalRuntime } from \\"@typegraph/sdk/providers/temporal.ts\\";\\n\\ntypegraph(\\n {\\n name: \\"order_delivery\\",\\n },\\n (g: any) => {\\n const pub = Policy.public();\\n\\n const temporal = new TemporalRuntime({\\n name: \\"order_delivery\\",\\n hostSecret: \\"HOST\\",\\n namespaceSecret: \\"NAMESPACE\\",\\n });\\n\\n const workflow_id = \\"order-delivery-1\\";\\n\\n const order_id = t.string();\\n\\n g.expose(\\n {\\n start: temporal.startWorkflow(\\"OrderWorkflow\\", order_id),\\n describe: workflow_id\\n ? 
temporal.describeWorkflow().reduce({ workflow_id })\\n : temporal.describeWorkflow(),\\n },\\n pub,\\n );\\n },\\n);\\n```\\n\\n \\n\\n {/* break */}\\n \\n\\n\\nActivities definition inside `activities.py`.\\n\\n```python\\nfrom temporalio import activity\\nimport time\\n\\n@activity.defn\\nasync def process_payment(order_id: str) -> str:\\n print(f\\"Processing payment for order {order_id}\\")\\n # Simulate payment processing logic\\n time.sleep(5)\\n return \\"Payment processed\\"\\n\\n@activity.defn\\nasync def check_inventory(order_id: str) -> str:\\n print(f\\"Checking inventory for order {order_id}\\")\\n # Simulate inventory check logic\\n time.sleep(4)\\n return \\"Inventory available\\"\\n\\n@activity.defn\\nasync def deliver_order(order_id: str) -> str:\\n print(f\\"Delivering order {order_id}\\")\\n # Simulate delivery logic\\n time.sleep(8)\\n return \\"Order delivered\\"\\n```\\n\\n\\n\\n\\nWorker definition inside `run_worker.py`.\\n\\n```python\\nimport asyncio\\n\\nfrom temporalio.client import Client\\nfrom temporalio.worker import Worker\\n\\nfrom activities import process_payment, deliver_order, check_inventory\\nfrom shared import ORDER_DELIVERY_QUEUE\\nfrom workflows import OrderWorkflow\\n\\n\\nasync def main() -> None:\\n client: Client = await Client.connect(\\"localhost:7233\\", namespace=\\"default\\")\\n worker: Worker = Worker(\\n client,\\n task_queue=ORDER_DELIVERY_QUEUE,\\n workflows=[OrderWorkflow],\\n activities=[process_payment, check_inventory, deliver_order],\\n )\\n await worker.run()\\n\\n\\nif __name__ == \\"__main__\\":\\n asyncio.run(main())\\n```\\n\\n\\n\\nAfter you have set up the above components, you need a client to start an `OrderWorkflow`. 
This is where Metatype comes in: through the simple APIs the [Temporal Runtime](/docs/reference/runtimes/temporal) exposes, you can communicate with your Temporal cluster.\\nBelow is the workflow communication bridge for this system, expressed within a [typegraph](/docs/reference/typegraph), which includes endpoints to start a new workflow and describe an existing one.\\n\\n```python\\nfrom typegraph import t, typegraph, Policy, Graph\\nfrom typegraph.providers.temporal import TemporalRuntime\\n\\n\\n@typegraph()\\ndef example(g: Graph):\\n public = Policy.public()\\n\\n temporal = TemporalRuntime(\\n \\"example\\", \\"HOST\\", namespace_secret=\\"NAMESPACE\\"\\n )\\n\\n workflow_id = \\"order-delivery-1\\"\\n\\n order_id = t.string()\\n\\n g.expose(\\n public,\\n start=temporal.start_workflow(\\"OrderWorkflow\\", order_id),\\n describe=temporal.describe_workflow().reduce({\\"workflow_id\\": workflow_id})\\n if workflow_id\\n else temporal.describe_workflow(),\\n )\\n```\\n\\n \\n\\n\\n\\nYou need to add the secrets `HOST` and `NAMESPACE` under your typegraph name inside the `metatype.yaml` file. These secrets are needed to connect with your Temporal cluster and can be safely stored in the config file as shown below.\\n\\n\\nmetatype.yaml\\n\\n```yaml\\ntypegate:\\n dev:\\n url: \\"http://localhost:7890\\"\\n username: admin\\n password: password\\n secrets:\\n example:\\n POSTGRES: \\"postgresql://postgres:password@postgres:5432/db\\"\\n HOST: \\"http://localhost:7233\\"\\n NAMESPACE: \\"default\\"\\n```\\n\\n\\n\\nYou need to add only the last two lines, as the others are auto-generated. Note that secrets are defined under the `example` parent, which is the name of your typegraph. If the name doesn\'t match, you will face secret-not-found errors when deploying your typegraph.\\n\\nBefore deploying the above typegraph, you need to start the Temporal server and the worker. 
You need to have [Temporal](https://learn.temporal.io/getting_started/typescript/dev_environment/#set-up-a-local-temporal-service-for-development-with-temporal-cli) installed on your machine.\\n\\n\\nBoot up Temporal\\n\\nStart the Temporal server.\\n\\n```bash\\ntemporal server start-dev\\n```\\n\\nStart the worker.\\n\\n\\n\\n\\n```bash\\nnpx ts-node src/worker.ts\\n```\\n\\n\\n```bash\\npython run_worker.py\\n```\\n\\n\\n\\n\\nAfter booting the Temporal server, run the command below to get a locally running [typegate](/docs/reference/typegate) instance with your typegraph deployed.\\n\\n```bash\\nmeta dev\\n```\\n\\nAfter completing the above steps, you can access the web GraphQL client of the typegate at [`http://localhost:7890/example`](http://localhost:7890/example). Run this query inside the client to start your workflow.\\n\\n```graphql\\nmutation {\\n start(\\n workflow_id: \\"order-delivery-3\\"\\n task_queue: \\"order-delivery-queue\\"\\n args: [\\"order12\\"]\\n )\\n}\\n```\\n\\nAfter a successful run, you will get a result that includes the `run_id` of the workflow that has just been started.\\n\\n\\n\\nYou can also check the Temporal Web UI to monitor your workflows, and you should see a result similar to this one.\\n\\n\\n\\nYou can explore the [Temporal Runtime](/docs/reference/runtimes/temporal) for more info.\\n\\nThis wraps up the blog, thanks for reading until the end :)"},{"id":"/2024/08/26/python-on-webassembly","metadata":{"permalink":"/blog/2024/08/26/python-on-webassembly","editUrl":"https://github.com/metatypedev/metatype/tree/main/docs/metatype.dev/blog/2024-08-26-python-on-webassembly/index.mdx","source":"@site/blog/2024-08-26-python-on-webassembly/index.mdx","title":"Python on WebAssembly: How?","description":"Metatype\'s different language runtimes are nice, but integrating one is an entire story. 
Let\'s discover how we managed to implement one for Python.","date":"2024-08-26T00:00:00.000Z","tags":[],"readingTime":11.125,"hasTruncateMarker":false,"authors":[],"frontMatter":{},"unlisted":false,"prevItem":{"title":"Distributed execution flow paradigms","permalink":"/blog/2024/08/27/distributed-execution-flow-paradigms"},"nextItem":{"title":"Programmatic deployment (v0.4.x)","permalink":"/blog/2024/05/09/programmatic-deployment"}},"content":"Metatype\'s different language runtimes are nice, but integrating one is an entire story. Let\'s discover how we managed to implement one for Python.\\n\\n## Why?\\n\\nYou have probably heard of \\"Function as a Service\\" or FaaS. \\nIn simple terms, FaaS are platforms that allow users to run code in response to events without the hassle of managing the underlying infrastructure. \\nUsers submit their programs and the platform takes care of the rest including, usually, scaling, availability, and configuration.\\nAWS Lambda is one such example, and FaaS as a whole are a popular implementation of the serverless model.\\n\\nMetatype has this model at heart, with applications composed of small functions that respond to events like HTTP requests and authorization checks. \\nThis is achieved through runtimes like the [`DenoRuntime`](/docs/reference/runtimes/deno) which implements a way to execute functions authored in TypeScript using Web Workers as implemented by [Deno](https://docs.deno.com/runtime/manual/runtime/workers/) (not based on Deno Deploy). \\n\\n:::note\\nMetatype supports running multiple apps or typegraphs on a single deployed cluster but we\'re still in the kitchen on a hosted cloud solution. 
\\nSubscribe to the [blog](https://metatype.dev/blog/rss.xml) or the [GitHub](https://github.com/metatypedev/metatype) repository for updates.\\n:::\\n\\nImplementing the `DenoRuntime` was a very straightforward affair, as the Typegate (the engine at the heart of Metatype) is primarily written in TypeScript and runs on a slightly modified version of the Deno runtime.\\nWhat\'s more, JavaScript has single-threaded and asynchronous semantics, and the V8 engine that it commonly runs on is of very high quality by all accounts. \\nThese qualities lend themselves very well to the requirements of running a serverless platform, like security (good sandboxing) and performance (low start-up latencies).\\nThis fact is reflected in the dominance of JavaScript in the serverless market, though it doesn\'t hurt that it\'s also the most popular language in use today.\\n\\nAnother very popular language is Python, and its standard library can be quite resourceful for this type of use case.\\nHowever, as we shall see, integrating the Python runtime isn\'t as simple as integrating Deno.\\n\\n## What are the requirements?\\n\\nThere are a number of Python runtimes available, but a set of extra factors limits what we can achieve.\\n\\n1. **Security**: functions should have limited access to the execution environment. Unlike Deno, Python doesn\'t have built-in sandboxing features.\\n2. **Speed**: functions should run fast and with low latency. We\'re interested in metrics like cold-start latency and the overhead of any cross-process/system communication.\\n3. **User-friendliness**: functionalities provided in any of the languages supported by Metatype should, within reason, mirror each other and maintain a degree of uniformity. We support inline code snippets and external file references for `DenoRuntime`, and this should be the case for Python as well.\\n4. 
**Interoperability**: functions running in Python will need to have access to other parts of the app running on the Typegate, like being able to invoke other functions.\\n\\nThe Typegate is a TypeScript program with a bit of Rust sprinkled in. \\nIt runs as a traditional POSIX process. \\nThink Linux containers. \\nThis fact renders multi-processing, one of the readily apparent approaches, undesirable, as it would require investing in robust worker-process management and distribution schemes.\\nIt\'d be great if we could keep everything inside the Typegate process.\\n\\nOne solution that presents itself here is the [PyO3](https://pyo3.rs/) project, which provides Rust bindings to different Python runtimes like CPython and PyPy.\\nIt would not only allow us to run Python code in-process, but it would also provide an easy way to expose functions written in Rust to Python and vice-versa. \\nA good solution for the bidirectional communication needed for our interoperability requirements.\\n\\nUnfortunately, PyO3 doesn\'t have any provisions for sandboxing, which is critical for our use case.\\nThis is where WebAssembly enters the picture. \\nWebAssembly, or Wasm for short, is an executable bytecode format that originates from the web world and is designed for applications that run inside web browsers. \\nThis use case shares most of our requirements, and the Wasm world promises excellent sandboxing properties that should be perfect for our use case.\\nWe just have to find a way to run Python inside of it.\\n\\n## An aside on WASI \\n\\nThe WebAssembly System Interface (WASI) is an additional spec for the bytecode format that formalizes how Wasm programs access their host environment.\\nA lot like POSIX, this generally means OS capabilities such as file system access and networking but, in its latest iteration, also extends to any custom host-defined functionality.\\n\\nWasm + WASI fits our use case very well. 
As opposed to multi-processing, we can instantiate, manage, and expose resources programmatically with ease.\\nAnd as luck would have it, some [community work](https://github.com/vmware-labs/webassembly-language-runtimes) had already been done at the time, leading to Wasm builds of CPython being available.\\n\\nUnfortunately, the WASI spec itself is a work in progress.\\nWhen we started out, only the limited \\"[preview1](https://github.com/WebAssembly/WASI/blob/main/legacy/preview1/docs.md)\\" implementation was supported by most runtimes.\\n`preview1` only focused on a standard set of host functionalities, much like a `libc` implementation.\\nGood enough, but any custom functionality meant relying on simple C-ABI-like functions for _intra_-process communication.\\nTo make this work easier, we elected to bring PyO3 back into the picture so that all the IPC code is written in Rust, the language with the most support in the Wasm ecosystem today.\\n\\nAll in all, this would mean the Python interpreter wrapped in a PyO3-based native API.\\nAn assembly that accepts user code as strings and then invokes it in response to events.\\nAll of this would be running inside a Wasm runtime, [WasmEdge](https://wasmedge.org/) in this case, which ticks off all of the sandboxing and security requirements.\\nThis approach is well described as the [Reactor pattern](https://wasmcloud.com/blog/webassembly-patterns-command-reactor-library#the-reactor-pattern), a common pattern used in Wasm land.\\n\\n\\n\\n### File system access\\n\\nSince the PyO3 project doesn\'t support [statically linking](https://github.com/PyO3/pyo3/issues/416) the Python runtime, we\'ll need to find a way to dynamically link `libpython`.\\nThankfully, Wasm does support [dynamic linking](https://github.com/WebAssembly/design/blob/main/DynamicLinking.md), and Wasm builds of [`libpython`](https://github.com/vmware-labs/webassembly-language-runtimes/tree/main/python) are available courtesy of the 
WebAssembly Language Runtimes project. \\nBringing all of this together isn\'t as simple, though, as PyO3 tries to load `libpython` from certain _paths_, a concept that isn\'t exactly clearly defined in Wasm\'s post-POSIX webtopia.\\n\\nOur first solution was to use [wasi-vfs](https://github.com/kateinoigakukun/wasi-vfs), a tool which allows you to embed a virtual file system, accessible through preview1 APIs, directly into your Wasm binaries.\\nThis way, we could prepare a single Wasm artifact that contains both the `libpython` build and the custom glue code.\\n\\nThis approach turned out to be quite hacky though, and after encountering several issues, we ultimately decided to go with **preopens**.\\nPreopens are another virtual file-system solution where you map an actual file-system directory to a virtual directory visible to a running Wasm instance.\\nThis means we\'ll need to prepare the `libpython` Wasm file on disk before running the instance, but it was an acceptable solution.\\nWe also use preopens to provide some of the user-submitted code to our custom Python runtime.\\n\\nThe following Rust snippet demonstrates what preopens looked like in use:\\n\\n```rust\\nfn init_Python_vm() -> Result