Example HTTP/2-only WSGI Server¶
This example is a more complex HTTP/2 server that acts as a WSGI server, passing data to an arbitrary WSGI application. This example is written using asyncio. The server supports most of PEP-3333, and so could in principle be used as a production WSGI server: however, that’s not recommended as certain shortcuts have been taken to ensure ease of implementation and understanding.
The main advantages of this example are:
It properly demonstrates HTTP/2 flow control management.
It demonstrates how to plug h2 into a larger, more complex application.
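Before reading the full listing, it helps to see the core synchronisation pattern the server relies on throughout: a WSGI worker thread registers an action with the asyncio loop via call_soon_threadsafe and then blocks on a threading.Event until the loop has actually performed it. The sketch below is illustrative only; the names (send_from_worker and so on) are not part of the example itself.

# Minimal sketch of the cross-thread handoff used throughout the example.
# The function and parameter names here are hypothetical.
import asyncio
import threading

def send_from_worker(loop, transport, data):
    """Called in a worker thread; blocks until the loop has written the data."""
    done = threading.Event()

    def _write():
        transport.write(data)  # runs in the asyncio thread
        done.set()             # unblocks the worker thread

    loop.call_soon_threadsafe(_write)
    done.wait()

The full example below applies exactly this pattern, with the added twist that the event only fires once flow control actually allows the data out.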
# -*- coding: utf-8 -*-
"""
asyncio-server.py
~~~~~~~~~~~~~~~~~

A fully-functional WSGI server, written using h2. Requires asyncio.

To test it, try installing httpbin from pip (``pip install httpbin``) and then
running the server (``python asyncio-server.py httpbin:app``).

This server does not support HTTP/1.1: it is an HTTP/2-only WSGI server. The
purpose of this code is to demonstrate how to integrate h2 into a more
complex application, and to demonstrate several principles of concurrent
programming.

The architecture looks like this:

+---------------------------------+
| 1x HTTP/2 Server Thread         |
|        (running asyncio)        |
+---------------------------------+
+---------------------------------+
| N WSGI Application Threads      |
|          (no asyncio)           |
+---------------------------------+

Essentially, we spin up an asyncio-based event loop in the main thread. This
launches one HTTP/2 Protocol instance for each inbound connection, all of which
will read and write data from within the main thread in an asynchronous manner.

When each HTTP request comes in, the server will build the WSGI environment
dictionary and create a ``Stream`` object. This object will hold the relevant
state for the request/response pair and will act as the WSGI side of the logic.
That object will then be passed to a background thread pool, and when a worker
is available the WSGI logic will begin to be executed. This model ensures that
the asyncio web server itself is never blocked by the WSGI application.

The WSGI application and the HTTP/2 server communicate via an asyncio queue,
together with locks and threading events. The locks themselves are implicit in
asyncio's "call_soon_threadsafe", which allows for a background thread to
register an action with the main asyncio thread. When the asyncio thread
eventually takes the action in question it sets a threading event, signaling
to the background thread that it is free to continue its work.

To make the WSGI application work with flow control, there is a very important
invariant that must be observed. Any WSGI action that would cause data to be
emitted to the network MUST be accompanied by a threading Event that is not
set until that data has been written to the transport. This ensures that the
WSGI application *blocks* until the data is actually sent. The reason we
require this invariant is that the HTTP/2 server may choose to re-order some
data chunks for flow control reasons: that is, the application for stream X may
have actually written its data first, but the server may elect to send the data
for stream Y first. This means that it's vital that there not be *two* writes
for stream X active at any one point or they may get reordered, which would be
particularly terrible.

Thus, the server must cooperate to ensure that each threading event only fires
when the *complete* data for that event has been written to the asyncio
transport. Any earlier will cause untold craziness.
"""
import asyncio
import importlib
import queue
import ssl
import sys
import threading

from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.events import (
    DataReceived, RequestReceived, WindowUpdated, StreamEnded, StreamReset
)


# Used to signal that a request has completed.
#
# This is a convenient way to do "in-band" signaling of stream completion
# without doing anything so heavyweight as using a class. Essentially, we can
# test identity against this empty object. In fact, this is so convenient that
# we use this object for all streams, for data in both directions: in and out.
END_DATA_SENTINEL = object()

# The WSGI callable. Stored here so that the protocol instances can get hold
# of the data.
APPLICATION = None


class H2Protocol(asyncio.Protocol):
    def __init__(self):
        config = H2Configuration(client_side=False, header_encoding='utf-8')

        # Our server-side state machine.
        self.conn = H2Connection(config=config)

        # The backing transport.
        self.transport = None

        # A dictionary of ``Stream`` objects, keyed by their stream ID. This
        # makes it easy to route data to the correct WSGI application instance.
        self.streams = {}

        # A queue of data emitted by WSGI applications that has not yet been
        # sent. Each stream may only have one chunk of data in either this
        # queue or the flow_controlled_data dictionary at any one time.
        self._stream_data = asyncio.Queue()

        # Data that has been pulled off the queue that is for a stream blocked
        # behind flow control limitations. This is used to avoid spinning on
        # _stream_data queue when a stream cannot have its data sent. Data that
        # cannot be sent on the connection when it is popped off the queue gets
        # placed here until the stream flow control window opens up again.
        self._flow_controlled_data = {}

        # A reference to the loop in which this protocol runs. This is needed
        # to synchronise up with background threads.
        self._loop = asyncio.get_event_loop()

        # Any streams that have been remotely reset. We keep track of these to
        # ensure that we don't emit data from a WSGI application whose stream
        # has been cancelled.
        self._reset_streams = set()

        # Keep track of the loop sending task so we can kill it when the
        # connection goes away.
        self._send_loop_task = None

    def connection_made(self, transport):
        """
        The connection has been made. Here we need to save off our transport,
        do basic HTTP/2 connection setup, and then start our data writing
        coroutine.
        """
        self.transport = transport
        self.conn.initiate_connection()
        self.transport.write(self.conn.data_to_send())
        self._send_loop_task = self._loop.create_task(self.sending_loop())

    def connection_lost(self, exc):
        """
        With the end of the connection, we just want to cancel our data sending
        coroutine.
        """
        self._send_loop_task.cancel()

    def data_received(self, data):
        """
        Process inbound data.
        """
        events = self.conn.receive_data(data)

        for event in events:
            if isinstance(event, RequestReceived):
                self.request_received(event)
            elif isinstance(event, DataReceived):
                self.data_frame_received(event)
            elif isinstance(event, WindowUpdated):
                self.window_opened(event)
            elif isinstance(event, StreamEnded):
                self.end_stream(event)
            elif isinstance(event, StreamReset):
                self.reset_stream(event)

        outbound_data = self.conn.data_to_send()
        if outbound_data:
            self.transport.write(outbound_data)

    def window_opened(self, event):
        """
        The flow control window got opened.

        This is important because it's possible that we were unable to send
        some WSGI data because the flow control window was too small. If that
        happens, the sending_loop coroutine starts buffering data.

        As the window gets opened, we need to unbuffer the data. We do that by
        placing the data chunks back on the back of the send queue and letting
        the sending loop take another shot at sending them.

        This system only works because we require that each stream only have
        *one* data chunk in the sending queue at any time. The threading events
        force this invariant to remain true.
        """
        if event.stream_id:
            # This is specific to a single stream.
            if event.stream_id in self._flow_controlled_data:
                self._stream_data.put_nowait(
                    self._flow_controlled_data.pop(event.stream_id)
                )
        else:
            # This event is specific to the connection. Free up *all* the
            # streams. This is a bit tricky, but we *must not* yield the flow
            # of control here or it all goes wrong.
            for data in self._flow_controlled_data.values():
                self._stream_data.put_nowait(data)

            self._flow_controlled_data.clear()

    async def sending_loop(self):
        """
        A call that loops forever, attempting to send data. This sending loop
        contains most of the flow-control smarts of this class: it pulls data
        off of the asyncio queue and then attempts to send it.

        The difficulties here are all around flow control. Specifically, a
        chunk of data may be too large to send. In this case, what will happen
        is that this coroutine will attempt to send what it can and will then
        store the unsent data locally. When a flow control event comes in that
        data will be freed up and placed back onto the asyncio queue, causing
        it to pop back up into the sending logic of this coroutine.

        This method explicitly *does not* handle HTTP/2 priority. That adds an
        extra layer of complexity to what is already a fairly complex method,
        and we'll look at how to do it another time.

        This coroutine explicitly *does not end*.
        """
        while True:
            stream_id, data, event = await self._stream_data.get()

            # If this stream got reset, just drop the data on the floor. Note
            # that we still need to set the event here to make sure that the
            # application doesn't lock up.
            if stream_id in self._reset_streams:
                event.set()
                continue

            # Check if the body is done. If it is, this is really easy! Again,
            # we *must* set the event here or the application will lock up.
            if data is END_DATA_SENTINEL:
                self.conn.end_stream(stream_id)
                self.transport.write(self.conn.data_to_send())
                event.set()
                continue

            # We need to send data, but not to exceed the flow control window.
            # For that reason, grab only the data that fits: we'll buffer the
            # rest.
            window_size = self.conn.local_flow_control_window(stream_id)
            chunk_size = min(window_size, len(data))
            data_to_send = data[:chunk_size]
            data_to_buffer = data[chunk_size:]

            if data_to_send:
                # There's a maximum frame size we have to respect. Because we
                # aren't paying any attention to priority here, we can quite
                # safely just split this string up into chunks of max frame
                # size and blast them out.
                #
                # In a *real* application you'd want to consider priority here.
                max_size = self.conn.max_outbound_frame_size
                chunks = (
                    data_to_send[x:x+max_size]
                    for x in range(0, len(data_to_send), max_size)
                )
                for chunk in chunks:
                    self.conn.send_data(stream_id, chunk)
                self.transport.write(self.conn.data_to_send())

            # If there's data left to buffer, we should do that. Put it in a
            # dictionary and *don't set the event*: the app must not generate
            # any more data until we got rid of all of this data.
            if data_to_buffer:
                self._flow_controlled_data[stream_id] = (
                    stream_id, data_to_buffer, event
                )
            else:
                # We sent everything. We can let the WSGI app progress.
                event.set()

    def request_received(self, event):
        """
        An HTTP/2 request has been received. We need to invoke the WSGI
        application in a background thread to handle it.
        """
        # First, we are going to want an object to hold all the relevant state
        # for this request/response. For that, we have a stream object. We
        # need to store the stream object somewhere reachable for when data
        # arrives later.
        s = Stream(event.stream_id, self)
        self.streams[event.stream_id] = s

        # Next, we need to build the WSGI environ dictionary.
        environ = _build_environ_dict(event.headers, s)

        # Finally, we want to throw these arguments out to a threadpool and
        # let it run.
        self._loop.run_in_executor(
            None,
            s.run_in_threadpool,
            APPLICATION,
            environ,
        )

    def data_frame_received(self, event):
        """
        Data has been received by the WSGI server and needs to be dispatched
        to a running application.

        Note that the flow control window is not modified here. That's
        deliberate: see Stream.__next__ for a longer discussion of why.
        """
        # Grab the stream in question from our dictionary and pass it on.
        stream = self.streams[event.stream_id]
        stream.receive_data(event.data, event.flow_controlled_length)

    def end_stream(self, event):
        """
        The stream data is complete.
        """
        stream = self.streams[event.stream_id]
        stream.request_complete()

    def reset_stream(self, event):
        """
        A stream got forcefully reset.

        This is a tricky thing to deal with because WSGI doesn't really have a
        good notion for it. Essentially, you have to let the application run
        until completion, but not actually let it send any data.

        We do that by discarding any data we currently have for it, and then
        marking the stream as reset to allow us to spot when that stream is
        trying to send data and drop that data on the floor.

        We then *also* signal the WSGI application that no more data is
        incoming, to ensure that it does not attempt to do further reads of the
        data.
        """
        if event.stream_id in self._flow_controlled_data:
            del self._flow_controlled_data[event.stream_id]

        self._reset_streams.add(event.stream_id)
        self.end_stream(event)

    def data_for_stream(self, stream_id, data):
        """
        Thread-safe method called from outside the main asyncio thread in order
        to send data on behalf of a WSGI application.

        Places data being written by a stream on an asyncio queue. Returns a
        threading event that will fire when that data is sent.
        """
        event = threading.Event()
        self._loop.call_soon_threadsafe(
            self._stream_data.put_nowait,
            (stream_id, data, event)
        )
        return event

    def send_response(self, stream_id, headers):
        """
        Thread-safe method called from outside the main asyncio thread in order
        to send the HTTP response headers on behalf of a WSGI application.

        Returns a threading event that will fire when the headers have been
        emitted to the network.
        """
        event = threading.Event()

        def _inner_send(stream_id, headers, event):
            self.conn.send_headers(stream_id, headers, end_stream=False)
            self.transport.write(self.conn.data_to_send())
            event.set()

        self._loop.call_soon_threadsafe(
            _inner_send,
            stream_id,
            headers,
            event
        )
        return event

    def open_flow_control_window(self, stream_id, increment):
        """
        Opens a flow control window for the given stream by the given amount.
        Called from a WSGI thread. Does not return an event because there's no
        need to block on this action: it may take place at any time.
        """
        def _inner_open(stream_id, increment):
            self.conn.increment_flow_control_window(increment, stream_id)
            self.conn.increment_flow_control_window(increment, None)
            self.transport.write(self.conn.data_to_send())

        self._loop.call_soon_threadsafe(
            _inner_open,
            stream_id,
            increment,
        )


class Stream:
    """
    This class holds all of the state for a single stream. It also provides
    several of the callables used by the WSGI application. Finally, it provides
    the logic for actually interfacing with the WSGI application.

    For these reasons, the object has *strict* requirements on thread-safety.
    While the object can be initialized in the main asyncio thread, the
    ``run_in_threadpool`` method *must* be called from outside that thread. At
    that point, the main asyncio thread may only call specific methods.
    """
    def __init__(self, stream_id, protocol):
        self.stream_id = stream_id
        self._protocol = protocol

        # Queue for data that has been received from the network. This is a
        # thread-safe queue, to allow both the WSGI application to block on
        # receiving more data and to allow the asyncio server to keep sending
        # more data.
        #
        # This queue is unbounded in size, but in practice it cannot contain
        # too much data because the flow control window doesn't get adjusted
        # unless data is removed from it.
        self._received_data = queue.Queue()

        # This buffer is used to hold partial chunks of data from
        # _received_data that were not returned out of ``read`` and friends.
        self._temp_buffer = b''

        # Temporary variables that allow us to keep hold of the headers and
        # response status until such time as the application needs us to send
        # them.
        self._response_status = b''
        self._response_headers = []
        self._headers_emitted = False

        # Whether the application has received all the data from the network
        # or not. This allows us to short-circuit some reads.
        self._complete = False

    def receive_data(self, data, flow_controlled_size):
        """
        Called by the H2Protocol when more data has been received from the
        network.

        Places the data directly on the queue in a thread-safe manner without
        blocking. Does not introspect or process the data.
        """
        self._received_data.put_nowait((data, flow_controlled_size))

    def request_complete(self):
        """
        Called by the H2Protocol when all the request data has been received.

        This works by placing the ``END_DATA_SENTINEL`` on the queue. The
        reading code knows, when it sees the ``END_DATA_SENTINEL``, to expect
        no more data from the network. This ensures that the state of the
        application only changes when it has finished processing the data from
        the network, even though the server may have long-since finished
        receiving all the data for this request.
        """
        self._received_data.put_nowait((END_DATA_SENTINEL, None))

    def run_in_threadpool(self, wsgi_application, environ):
        """
        This method should be invoked in a threadpool. At the point this method
        is invoked, the only safe methods to call from the original thread are
        ``receive_data`` and ``request_complete``: any other method is unsafe.

        This method handles the WSGI logic. It invokes the application callable
        in this thread, passing control over to the WSGI application. It then
        ensures that the data makes it back to the HTTP/2 connection via
        the thread-safe APIs provided below.
        """
        result = wsgi_application(environ, self.start_response)

        try:
            for data in result:
                self.write(data)
        finally:
            # This signals that we're done with data: the server knows it can
            # now clean up its state for this stream.
            self.write(END_DATA_SENTINEL)

    # The next few methods are called by the WSGI application. Firstly, the
    # three methods provided by the input stream.
    def read(self, size=None):
        """
        Called by the WSGI application to read data.

        This method is one of the two that explicitly pump the input data
        queue, which means it deals with the ``_complete`` flag and the
        ``END_DATA_SENTINEL``.
        """
        # If we've already seen the END_DATA_SENTINEL, return immediately.
        if self._complete:
            return b''

        # If we've been asked to read everything, just iterate over ourselves.
        if size is None:
            return b''.join(self)

        # Otherwise, as long as we don't have enough data, spin looking for
        # another data chunk.
        data = b''
        while len(data) < size:
            try:
                chunk = next(self)
            except StopIteration:
                break

            # Concatenating strings this way is slow, but that's ok, this is
            # just a demo.
            data += chunk

        # We may now have more data than we were asked for (or, if the stream
        # ended early, less). Return what was requested and stash any excess
        # on a buffer: we'll use it later.
        to_return = data[:size]
        self._temp_buffer = data[size:]
        return to_return

    def readline(self, hint=None):
        """
        Called by the WSGI application to read a single line of data.

        This method rigorously observes the ``hint`` parameter: it will only
        ever read that much data. It then splits the data on a newline
        character and throws everything it doesn't need into a buffer.
        """
        data = self.read(hint)
        first_newline = data.find(b'\n')
        if first_newline == -1:
            # No newline, return all the data
            return data

        # We want to slice the data so that the head *includes* the first
        # newline. Then, any data left in this line we don't care about should
        # be prepended to the internal buffer.
        head, tail = data[:first_newline + 1], data[first_newline + 1:]
        self._temp_buffer = tail + self._temp_buffer

        return head

    def readlines(self, hint=None):
        """
        Called by the WSGI application to read several lines of data.
        """
        data = self.read(hint)
        lines = data.splitlines(keepends=True)
        return lines

    def start_response(self, status, response_headers, exc_info=None):
        """
        This is the PEP-3333 mandated start_response callable.

        All it does is store the headers for later sending, and return our
        ``write`` callable.
        """
        if self._headers_emitted and exc_info is not None:
            raise exc_info[1].with_traceback(exc_info[2])

        assert not self._response_status or exc_info is not None
        self._response_status = status
        self._response_headers = response_headers

        return self.write

    def write(self, data):
        """
        Provides some data to write.

        This function *blocks* until such time as the data is allowed by
        HTTP/2 flow control. This allows a client to slow or pause the response
        as needed.

        PEP-3333 discourages applications from using this callable, but once
        we have it, it becomes quite convenient, so this app actually runs all
        writes through it.
        """
        if not self._headers_emitted:
            self._emit_headers()
        event = self._protocol.data_for_stream(self.stream_id, data)
        event.wait()
        return

    def _emit_headers(self):
        """
        Sends the response headers.

        This is only called from the write callable and should only ever be
        called once. It does some minor processing (converts the status line
        into a status code because reason phrases are evil) and then passes
        the headers on to the server. This call explicitly blocks until the
        server notifies us that the headers have reached the network.
        """
        assert self._response_status and self._response_headers
        assert not self._headers_emitted
        self._headers_emitted = True

        # We only need the status code
        status = self._response_status.split(" ", 1)[0]
        headers = [(":status", status)]
        headers.extend(self._response_headers)
        event = self._protocol.send_response(self.stream_id, headers)
        event.wait()
        return

    # These two methods implement the iterator protocol. This allows a WSGI
    # application to iterate over this Stream object to get the data.
    def __iter__(self):
        return self

    def __next__(self):
        # If the complete request has been read, abort immediately.
        if self._complete:
            raise StopIteration()

        # If we have data stored in a temporary buffer for any reason, return
        # that and clear the buffer.
        #
        # This can actually only happen when the application uses one of the
        # read* callables, but that's fine.
        if self._temp_buffer:
            buffered_data = self._temp_buffer
            self._temp_buffer = b''
            return buffered_data

        # Otherwise, pull data off the queue (blocking as needed). If this is
        # the end of the request, we're done here: mark ourselves as complete
        # and call it time. Otherwise, open the flow control window an
        # appropriate amount and hand the chunk off.
        chunk, chunk_size = self._received_data.get()
        if chunk is END_DATA_SENTINEL:
            self._complete = True
            raise StopIteration()

        # Let's talk a little bit about why we're opening the flow control
        # window *here*, and not in the server thread.
        #
        # The purpose of HTTP/2 flow control is to allow for servers and
        # clients to avoid needing to buffer data indefinitely because their
        # peer is producing data faster than they can consume it. As a result,
        # it's important that the flow control window be opened as late in the
        # processing as possible. In this case, we open the flow control window
        # exactly when the server hands the data to the application. This means
        # that the flow control window essentially signals to the remote peer
        # how much data hasn't even been *seen* by the application yet.
        #
        # If you wanted to be really clever you could consider not opening the
        # flow control window until the application asks for the *next* chunk
        # of data. That means that any buffers at the application level are now
        # included in the flow control window processing. In my opinion, the
        # advantage of that process does not outweigh the extra logical
        # complexity involved in doing it, so we don't bother here.
        #
        # Another note: you'll notice that we don't include the _temp_buffer in
        # our flow control considerations. This means you could in principle
        # lead us to buffer slightly more than one connection flow control
        # window's worth of data. That risk is considered acceptable for the
        # much simpler logic available here.
        #
        # Finally, this is a pretty dumb flow control window management scheme:
        # it causes us to emit a *lot* of window updates. A smarter server
        # would want to use the content-length header to determine whether
        # flow control window updates need to be emitted at all, and then to be
        # more efficient about emitting them to avoid firing them off really
        # frequently. For an example like this, there's very little gained by
        # worrying about that.
        self._protocol.open_flow_control_window(self.stream_id, chunk_size)

        return chunk


def _build_environ_dict(headers, stream):
    """
    Build the WSGI environ dictionary for a given request. To do that, we'll
    temporarily create a dictionary for the headers. While this isn't actually
    a valid way to represent headers, we know that the special headers we need
    can only appear once in the block.

    This code is arguably somewhat incautious: the conversion to dictionary
    should only happen in a way that allows us to correctly join headers that
    appear multiple times. That's acceptable in a demo app: in a productised
    version you'd want to fix it.
    """
    header_dict = dict(headers)
    path = header_dict.pop(':path')
    try:
        path, query = path.split('?', 1)
    except ValueError:
        query = ""
    server_name = header_dict.pop(':authority')
    try:
        server_name, port = server_name.split(':', 1)
    except ValueError:
        port = "8443"

    environ = {
        'REQUEST_METHOD': header_dict.pop(':method'),
        'SCRIPT_NAME': '',
        'PATH_INFO': path,
        'QUERY_STRING': query,
        'SERVER_NAME': server_name,
        'SERVER_PORT': port,
        'SERVER_PROTOCOL': 'HTTP/2',
        'HTTPS': "on",
        'SSL_PROTOCOL': 'TLSv1.2',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': header_dict.pop(':scheme'),
        'wsgi.input': stream,
        'wsgi.errors': sys.stderr,
        'wsgi.multithread': True,
        'wsgi.multiprocess': False,
        'wsgi.run_once': False,
    }
    if 'content-type' in header_dict:
        environ['CONTENT_TYPE'] = header_dict.pop('content-type')
    if 'content-length' in header_dict:
        environ['CONTENT_LENGTH'] = header_dict.pop('content-length')
    for name, value in header_dict.items():
        # Follow the CGI convention: uppercase the name and swap hyphens for
        # underscores (e.g. user-agent becomes HTTP_USER_AGENT).
        environ['HTTP_' + name.upper().replace('-', '_')] = value
    return environ


# Set up the WSGI app.
application_string = sys.argv[1]
path, func = application_string.split(':', 1)
module = importlib.import_module(path)
APPLICATION = getattr(module, func)

# Set up TLS
ssl_context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_context.options |= (
    ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 | ssl.OP_NO_COMPRESSION
)
ssl_context.set_ciphers("ECDHE+AESGCM")
ssl_context.load_cert_chain(certfile="cert.crt", keyfile="cert.key")
ssl_context.set_alpn_protocols(["h2"])

# Do the asyncio bits
loop = asyncio.get_event_loop()
# Each client connection will create a new protocol instance
coro = loop.create_server(H2Protocol, '127.0.0.1', 8443, ssl=ssl_context)
server = loop.run_until_complete(coro)

# Serve requests until Ctrl+C is pressed
print('Serving on {}'.format(server.sockets[0].getsockname()))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    # Close the server
    server.close()
    loop.run_until_complete(server.wait_closed())
    loop.close()
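If you would rather not install httpbin, any PEP-3333 callable will do. The module below is a hypothetical stand-in (the hello.py name and the app callable are not part of the example): save it as hello.py and run ``python asyncio-server.py hello:app``.

# hello.py -- a minimal WSGI application for exercising the server.
def app(environ, start_response):
    body = b"Hello from " + environ['SERVER_PROTOCOL'].encode('ascii')
    # HTTP/2 requires lowercase header names.
    start_response("200 OK", [
        ("content-type", "text/plain"),
        ("content-length", str(len(body))),
    ])
    return [body]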
You can use the cert.crt and cert.key files provided within the repository, or generate your own certificates using OpenSSL:
$ openssl req -x509 -newkey rsa:2048 -keyout cert.key -out cert.crt -days 365 -nodes
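Once the server is running you can smoke-test it with any HTTP/2-capable client. One possibility, assuming httpx has been installed with HTTP/2 support (``pip install httpx[http2]``) and the server is serving httpbin on localhost:8443, is a short script along these lines:

# Hypothetical smoke test; not part of the example itself.
import httpx

# verify=False because the example uses a self-signed certificate.
with httpx.Client(http2=True, verify=False) as client:
    response = client.get("https://localhost:8443/get")
    print(response.http_version)   # expected: "HTTP/2"
    print(response.status_code)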