by David Addison

Building an Email Service Provider (ESP) Marketing Automation Platform

02/28/2017

The Remake of Dirigo's Messaging Platform - Konvey.com

This blog post served as a running progress report on our 'Messaging Platform / ESP Build' effort until January 2, 2017. By no means is development finished. If you'd like to take a look at what we built, give us a ring or head to www.konvey.com.

In early March 2015 Dirigo set out to retool our bulk email sending service, or commercial ESP. Our goal is to build an application able to send 10-20 billion messages per year (prior to scaling the system up) for thousands of customers, with advanced segmentation, marketing automation, and closed-loop marketing features — a level of complexity offered by only a handful of the largest email marketers. And our goal is to do it with bigger data and more agility. We're using the same underlying messaging system (i.e. the email server technology) as ExactTarget, Campaigner, MailChimp, Responsys... The plan is to be Alpha testing a new email marketing platform by January 2016 [we pushed that back to late Fall 2016 and then again to March 25, 2017 because the scope of this project changed — from minimally viable to fully featured].

Building your own ESP is not an easy thing. It requires a deep understanding of DNS, email protocols, load-balanced hardware, mail transfer agents, feedback loops and bounce management, reputation, email abuse monitoring, blacklists, email templates, list segmentation, and so on. To do it the right way you'd need an industry insider who has already built a successful ESP, a whole lot of time, and real management commitment. We're going to be breaking into a difficult and somewhat mature market with feature-rich product offerings aimed at the most sophisticated campaigns/marketers. This ain't no Constant Contact.

Well — we recruited Peter, who has more than enough technical know-how and ESP-specific commercial experience to get the job done. Peter engineered and built a large ESP platform (i.e. thousands of clients) for use in the restaurant vertical and has been sending high volumes (i.e. billions) of email for more than a decade.

Progress Updates: 

May 5, 2017 - System documentation and weekly updates to the platform continue. We're adding functionality as needed.

January 2, 2017 [Week 96]

The time to take the application into production mode is here.  Ninety-six weeks. We'll pause for a brief moment to celebrate and then resume work...  

The development team wanted more time to test and hone. Not unreasonable or unexpected. Management wanted to be live because Konvey has been almost two years in the making. As a compromise, we agreed that Konvey will remain mostly a fully managed solution (by internal staff) until the end of March 2017 as we operationalize and document Konvey.

December 26 [Week 94 & 95]

  • Unit testing is underway as we prepare to take Konvey live for internal (Dirigo fully-managed) use. Konvey.com will be pushed to production in the first few days of 2017. Our test client – a SalesForce/iContact user – will begin using Konvey in January.
  • Data warehousing and reporting are now minimally viable and production ready.
  • PowerMTA configuration settings have been reconfigured, organized, and abstracted to separate include files. A new internal /24 subnet has been configured for commercial email. Residual settings from the past few years have been cleaned up.
  • Feedback Loops for MSN, Yahoo, Google, SBC Global, UnitedOnline, Time Warner/RoadRunner… are in place. We're still working on our new /23 IP address allotment from ARIN. We'll go live with our current IP address ranges.
  • New web service exception handling in the Admin UI, for displaying more detailed error messages.
  • Bug fixes for the filtering UI.
  • First draft of the recipient preview functionality on the Edit Schedule screen, which necessitated lots of refactoring to the filtering and contact loading logic.
  • The recipient preview UI on the Edit Schedule screen is complete. Minor bug fixes to the filtering UI. All screens that display a grid of contacts now allow those contacts to be edited as well.
  • Added "single send" functionality, accessible from the Find Contacts screen.
  • Fixed upload bugs in Export/Import and multi-threading bugs in the Import and Mailing Engines. Added the ability to cancel a broadcast in progress.
  • Bug fixes to multi-threading in the Import and Mailing engines. New method to construct TransactionScope objects using the ReadCommitted isolation level and no timeout (a sketch follows this list).
  • Tweaked multi-threading settings in the Mailing and Schedule Update engines.
  • Fixed a bug in the ContactLoader that caused the MaxResults count for related table data to be applied to the entire dataset instead of per-contact.
  • New progress bars on the Manage Broadcasts screen. New ability to copy schedules.
  • Multithreading enhancements to the Mailing Engine and Contact Loader.
  • Fixed a bug in which mailings were unintentionally inheriting SQL transactions. Added polling to the Manage Broadcasts screen to check for imminent broadcasts.
  • The Import Engine can now be paused, and there's now a web service method in the Service project that will Pause() and Continue() all services.
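
For the curious, here is a minimal sketch of that kind of TransactionScope helper — the class and method names are illustrative, not Konvey's actual code. A Timeout of TimeSpan.Zero tells System.Transactions to use the maximum allowed timeout, i.e. effectively no timeout.

```csharp
using System;
using System.Transactions;

public static class TransactionFactory
{
    // Creates a TransactionScope with ReadCommitted isolation and no timeout.
    public static TransactionScope CreateReadCommittedScope()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.Zero   // maximum allowed timeout, effectively "none"
        };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}

// Usage:
// using (var scope = TransactionFactory.CreateReadCommittedScope())
// {
//     /* database work */
//     scope.Complete();
// }
```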

December 12 [Week 93]

A good chunk of week 93 and 94 will be devoted to the Konvey data mart and reporting. 

December 5 [Week 92]

  • The data warehousing topology is complete and we're scripting the database tables and the warehousing jobs. The warehouse will reside on a different database. This is very much a textbook implementation of warehousing concepts, design, and data integration.
  • Feedback Loops (FBL) for Yahoo, Microsoft, and GMail have been activated.
  • ARIN pre-approval for two /24 blocks is underway. 
  • Join landing page(s) "design skinning" test is complete.

November 28 [Week 91]

  • On the landing join page, changed the list dropdown to a standard HTML SELECT element, and added required validation to the dropdown.  Greg reported issues with the previous element during testing.
  • Modified Dirigo.Mail.Web so that /content maps to a network share in lieu of an IIS virtual directory.
  • Added comment-generation to the public API builder UI. 
  • Added middleware that adds an HTTP response header to assist with load balancing troubleshooting (a sketch follows this list).
  • Finished (for now) the Public API form builder UI.
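
A minimal sketch of that load-balancing header idea on ASP.NET Core — the header name and extension method are illustrative assumptions, not necessarily what Konvey uses:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;

public static class ServerHeaderMiddlewareExtensions
{
    // Stamps every response with the name of the machine that served it, so we
    // can tell which node behind the load balancer handled a given request.
    public static IApplicationBuilder UseServerIdentificationHeader(this IApplicationBuilder app)
    {
        return app.Use(async (context, next) =>
        {
            context.Response.OnStarting(() =>
            {
                // "X-Served-By" is an assumed header name.
                context.Response.Headers["X-Served-By"] = Environment.MachineName;
                return Task.CompletedTask;
            });
            await next();
        });
    }
}

// In Startup.Configure: app.UseServerIdentificationHeader();
```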

November 21 [Week 90]

  • Prototype Email Editor.
  • Prototype reporting.
  • Enhancements to the public API and the public API form generator, including a new unsubscribe.

November 14 [Week 89]

  • Updated all NuGet packages to latest versions.
  • Upgraded to .NET Core 1.1.0 RTM.
  • Fixed bugs in the new MSMQ dequeuing functionality for Messages.
  • Fixed bugs with automatic schedule updating of non-recurring mailings.
  • Implemented a new MSMQ failover strategy for inserting rows into the Messages table.
  • Finished the implementation of the new tagging UI on the Edit Contact and Configure Import screens.
  • Added new grouped tag selection UI in the Public API and Edit Contact pages.
  • Updated TypeScript typings.

November 7 [Week 88]

  • Significant enhancements to the UI for generating the public API sample page.
  • Completed the base functionality of the public-facing API. Created an admin-facing UI for building embeddable forms that target the Public API. Greg tested the form and API code on a staging website.
  • Created a new Contact Web Service (no functionality yet), with CORS middleware enabled.
  • Fixed a security issue in which landing pages could override internal-only fields in the database.

October 31 [Week 87]

  • Building the API - this task is going to take multiple weeks to build-out functionality.
  • Planning reporting interfaces.
  • Tags can now be added and removed via query string parameters on the public-facing join & profile pages, as well as from JavaScript.

October 24 [Week 86]

  • Building tags — Tags provide an easy way to group Contacts without adding new Profile Fields. Both Tags and Tag Groups can be used as filters for segmentation and for importing/exporting. Tags can be used to append data to contact records, such as "2014 Annual Pass Holder", "2012 Annual Pass Holder", "June 2016 Hotel Guest", "High Roller", "Beer Fest Attendee", etc. Tags are great for tagging guests to special events or for Konvey users that follow a pattern of "upload list / send" (e.g. the list is imported for each and every send). Tags can also be used to append source attributes such as "Website Optin" or "Wedding Optin". Tags can help to easily turn a colossal in-house list into groups that share a unique and exploitable need or interest.
  • Bug fixes to the merge engine & mailing engine, and the Verify Phone Number UI.
  • Removed project references to System.Web; switched to encoding routines in different libraries. Minor modifications and enhancements to the Merge Engine.

October 17 [Week 85]

  • Unit testing some of the many Konvey merge codes.
  • Created a new Preview Landing Page screen.
  • Refactored the antiforgery code as middleware, and modified it to account for multiple logins in the same browser session.
  • Implemented antiforgery for Angular pages as well as Razor views.

October 10 [Week 84]

  • On the Manage Mailings screen, created functionality to copy an existing mailing.
  • Fixed a bug in the FilterSubscriptionExpression class. 
  • In the MailContextFactory class, provided more explicit error handling for invalid database instances.
  • Built logic to force Production version of the application to NOT use minification and concatenation of JS and CSS files when a specific x-header is detected.  This will aid with diagnosing and fixing bugs.
  • Set up DNS for ISP Feedback Loop use.

October 3 [Week 83]

A Gulp task was deployed to combine and minify JavaScript files. Gulp is a streaming build system. We used it to combine, minify, and version JS files for production & to combine and version JS files for the QA environment. This took some time because the JavaScript load order is crucial to the application. Files are concatenated in the order that they are specified in the gulp.src function.

We finished the Konvey functionality for managing Sites, Hostnames, IP Addresses, and Registration Codes, and published the changes to QA. Hostnames and IP addresses can't be updated, only inserted and deleted, because they're primary keys (in addition to SiteID); the Entity Framework won't update them in place. Since this is a Konvey staff admin function -- no biggie (it's not client-facing): deleting and then re-inserting accomplishes the same thing. While it's now possible to add a new Site through the UI, we did not build a UI for deleting (because of the danger). Deleting a Site (and all the related tables) directly from SQL Management Studio will be our delete method.
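
A minimal sketch of the delete-then-insert pattern, assuming an EF Core style DbContext; the MailContext, Hostname, and column names here are illustrative, not Konvey's actual schema:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Hostname
{
    public int SiteId { get; set; }
    public string Name { get; set; }
}

public class MailContext : DbContext
{
    public DbSet<Hostname> Hostnames { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The hostname value is part of the composite primary key,
        // which is why EF will not update it in place.
        modelBuilder.Entity<Hostname>().HasKey(h => new { h.SiteId, h.Name });
    }
}

public static class HostnameAdmin
{
    // "Renaming" a hostname means deleting the old row and inserting a new one
    // in the same SaveChanges call, since the value being changed is the key.
    public static void ReplaceHostname(MailContext db, int siteId, string oldName, string newName)
    {
        var existing = db.Hostnames.FirstOrDefault(h => h.SiteId == siteId && h.Name == oldName);
        if (existing != null)
            db.Hostnames.Remove(existing);

        db.Hostnames.Add(new Hostname { SiteId = siteId, Name = newName });
        db.SaveChanges();
    }
}
```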

September 26 [Week 82]

  • The Jint repository has been added as a submodule. Jint is a JavaScript interpreter for .NET which provides full ECMA 5.1 compliance and can run on any .NET platform.
  • The Merge Engine now supports CLR integration, so that server-side JavaScript can call .NET code. The common language runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution environment for all .NET Framework code. (A sketch of the pattern follows this list.)
  • Begin the process of setting up new Feedback Loops.
  • A basic Konvey admin area -- that allows our staff to setup and configure new Konvey users -- is being built.
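
A minimal sketch of the Jint pattern — a merge script that calls back into a .NET delegate. The class name and the registered function are illustrative; Konvey's real merge-code bindings are more involved:

```csharp
using System;
using Jint;

public static class MergeScriptHost
{
    // Runs a snippet of server-side JavaScript with Jint and exposes a .NET
    // delegate to the script - the "CLR integration" mentioned above.
    public static string RenderGreeting(string firstName)
    {
        var engine = new Engine()
            .SetValue("toUpper", new Func<string, string>(s => s.ToUpperInvariant()))
            .SetValue("firstName", firstName);

        // The script calls back into .NET via the registered "toUpper" delegate.
        return engine
            .Execute("var greeting = 'Hello, ' + toUpper(firstName) + '!'; greeting")
            .GetCompletionValue()
            .AsString();
    }
}

// MergeScriptHost.RenderGreeting("peter")  =>  "Hello, PETER!"
```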

September 19 [Week 81]

  • 10, 9, application start, 6, 5, 4, 3, 2, 1, zero. All systems running. Liftoff! We have outbound email! Forty-four minutes past four on 9/20/2016. Liftoff on Konvey 1.0.0. Honestly, we didn't think - with so many settings and servers and room for application error - that it was going to go.  Eighty-one weeks to a single blast from the QA server.  And no time to spare.  Our first large client will come on-board before the end of the year.  There is much work still to be done.
  • Migrated Power MTA to a StarWind cluster.  The physical hardware has been burning-in for about 8 weeks.  Lots of firewall and network configuration changes were required. http://esp.dirigodev.com is now using the new redundant solution.
  • Working on the relay of email for the development, QA and staging environments.  We don't want any email to be relayed in these environments -- except to konvey.com, dirigodev.com, internal IP addresses, or other configured testing exceptions.  This will help to prevent erroneous sending. Getting this to work is quite complicated.
  • Working on load balancing and 'failure' logic.  Lots of effort has gone into recovering from failures and graceful shutdown.  What happens when SMTP port 25 times out in the middle of a distribution?  High Level: The system switches to a pickup directory (a UNC File Share) and begins to throttle down because the pickup directory will be overwhelmed in under an hour (e.g. when writing >10MM emails per hour a single directory has some limitations).  If port 25 is not working the OS must move the SMTP service to its counterpart in the cluster.  This can take 20-60 seconds.  The same goes for the app itself.  If the app is receiving a 500 error then the system will auto migrate to its pair (e.g. another server).  During brief cutovers the system needs to respond appropriately and notify internal developers and staff. The goal is to deliver 100% of the email during a failure. There is a ton of logic/code wrapped around failure, preparing to go offline, code publishing, and shutdown scenarios.
  • The ski e-commerce system -- and DirigoEdge -- is undergoing its first round of load testing. Konvey had performance 'love' and load tests throughout the build. Systems are not fast if they're not built for speed.
  • We're fixing some Message Queuing (MSMQ) issues. The name 'Message Queuing' is odd because MSMQ has nothing to do with e-mail or texting. The technology goes back to Windows NT (circa 1999) and Message Queuing is still heavily used. Because the volume of writes to the database is extreme, the application queues requests into MSMQ. Every x number of seconds a SQL transaction is created to bulk insert into the database - we pick up a lot of performance gain from queuing requests (a sketch of the pattern follows this list).
  • Bounce processing is not working properly. Not a huge surprise. Bounce processing was an early deliverable - June 2015. Moreover, the BoogieTools API is not installed on the new server.
  • Fixed bugs in the Tracking Engine and the Installer Service.
  • Fixed bugs in the Add Landing Page screen. Various environmental changes related to deployment of the QA environment.
  • PowerMTA is being upgraded to v4.5.
  • Modified some X- headers in the SMTP Engine. Overhauled exception handling in the Incoming Mail Engine.
  • Snippets can now be defined within mailings, allowing site-level snippets to define template content and mailings to contain just mailing-specific content.
  • Various fixes and exception handling enhancements to the click tracker and incoming mail engine.
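
A simplified sketch of the queue-then-bulk-insert pattern (System.Messaging on the full .NET Framework). The queue path, table name, and batch size are illustrative assumptions:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Messaging;

public class MessageBatchWriter
{
    private readonly MessageQueue _queue = new MessageQueue(@".\private$\konvey_messages");
    private readonly string _connectionString;

    public MessageBatchWriter(string connectionString)
    {
        _connectionString = connectionString;
        _queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
    }

    // Called every few seconds by a timer: drain whatever has accumulated in the
    // queue and insert it in one bulk operation instead of one INSERT per message.
    public void FlushToDatabase(int maxBatchSize = 10000)
    {
        var table = new DataTable();
        table.Columns.Add("Body", typeof(string));

        for (var i = 0; i < maxBatchSize; i++)
        {
            try
            {
                var message = _queue.Receive(TimeSpan.Zero);   // throws when the queue is empty
                table.Rows.Add((string)message.Body);
            }
            catch (MessageQueueException ex)
                when (ex.MessageQueueErrorCode == MessageQueueErrorCode.IOTimeout)
            {
                break;   // queue drained
            }
        }

        if (table.Rows.Count == 0) return;

        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            using (var bulk = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
            {
                bulk.DestinationTableName = "dbo.Messages";
                bulk.WriteToServer(table);
                transaction.Commit();
            }
        }
    }
}
```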

September 12 [Week 80]

  • BIG MILESTONE: The application has been installed into the QA environment -- http://qa.konvey.com is only accessible from approved IP addresses. Up to this point everything has been done in the DEVELOPMENT environment. The QA environment will provide a permanent place for testing and will put us into a DEVELOPMENT > QA > STAGING > PRODUCTION publishing pipeline. The installation scripts are being refactored because they were last updated over a year ago -- this task might take the rest of this week. 
  • Testing the application and filling in gaps where they exist.
  • Load balancing roll-out for DirigoEdge.
  • Meetings with DirigoEdge Ski E-commerce Application Team to work on interoperability with Konvey.

September 5 [Week 79]

  • Testing the application and filling in gaps where they exist.
  • Load balancing.

August 29 [Week 78]

  • This website (dirigodev.com) was our first production website to be load balanced - it is being actively served by multiple servers. Sugarbush.com also moved into the Konvey cloud (the entirely new cloud hosting environment built for Konvey.com) this week.
  • Because we have production applications running on the new system we spent all of this week flushing out issues and testing the setup. We also reworked some of the security and launched new backup plans.
  • Infrastructure for dynamic content is now in place, but the corresponding SQL is not.
  • Dynamic filtering — hereafter called "content filtering" — has been fully implemented.

August 22 [Week 77]

  • Completed the load balancing engine and health test controllers.
  • Added an administrative panel to control whether a website (or entire servers with many websites) is load balanced. The admin screen controls the health monitoring page(s). Locked down health monitoring functions with Regex so that they are only available to inside traffic. 
  • Added domain controller user authentication for admin.
  • Working on directory hierarchy for SAN image repository.  Test to see if IIS restart will be an issue when adding or deleting virtual directories and large numbers of images/files. 
  • Most everything this week has dual application - the same methods used for Konvey are being deployed to DirigoEdge 3.x.

August 15 [Week 76]

  • Continued work integrating Serilog into the Konvey project. Added the ability for Serilog to log a RequestUrl property (for log events that occur within an HTTP context).
  • Refactored DirigoEdge CMS 3.x Core to centrally log events.
  • Working on deployment: building a health monitoring engine which will be used to turn on or off load balancers.
  • Helped Dirigo to pull two more servers from the host center - we've taken 17 servers out this year and installed 9 new ones. We have 5 times or more our old capacity.  

August 8 [Week 75]

This week we brought the new SQL cluster online, migrated data, and reworked disaster recovery and backup. We also replaced Apache log4net with Serilog (a diagnostic logging library for .NET) and worked on collecting actionable and insightful log events from different parts of the application. What happens if a server fails or a process is cut mid-stream? Answer: the event is logged and we recover gracefully from it.
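
A minimal sketch of a Serilog setup along these lines — the sink choices, paths, and property names are illustrative, not our exact configuration:

```csharp
using Serilog;

public static class LoggingConfig
{
    // One static logger for the whole process; every engine and website writes
    // structured events through it.
    public static void Configure()
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Information()
            .Enrich.WithProperty("Application", "Konvey")
            .WriteTo.RollingFile(@"D:\Logs\konvey-{Date}.log")
            // A second WriteTo.* sink would ship the same events to the central log store.
            .CreateLogger();

        // Properties in the message template survive as structured data in the sink.
        Log.Information("Mailing {MailingId} queued for {RecipientCount} recipients", 1234, 56789);
    }
}
```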

As we move our systems to a higher level, learnings from the Konvey project are being deployed enterprise-wide. Our DirigoEdge Ski Resort Platform and Cloud Hosting offerings are being retooled along the way. Doing all this at once has slowed development of Konvey by > 90 days. Example: We're going to centrally log all events into a single location here at Dirigo - the ramifications being that we need to modify dozens of websites. Another example: We moved more than 40 databases to the new SQL cluster costing > 2 days of development resources.

Our GitHub "Pulse" statistics for the week: 3 commits to master, 42 files have changed and there have been 919 additions and 558 deletions. For the past 30 days: excluding merges, 9 commits to all branches. On master, 131 files have changed and there have been 3,187 additions and 1,420 deletions. Since the beginning of the project: 193 commits, 1,551,126++ / 1,469,840--.

Feature changes:

  • Added Cancel and Wait Methods to various engines and modified the services to gracefully handle stop and shutdown events.
  • Modified the batch files that launch the VS projects so that they can take environment variables.
  • Changed IP addresses so that they can be configured on a per-machine basis.

August 1 [Week 74]

  • All outgoing emails are now formatted with UTF-8.
  • Implemented new List-Unsubscribe and Subscribe functionality. 
  • ServerFarmId is now included in all outgoing URLs and bounce/unsubscribe addresses.
  • Added support for the "online version" of a mailing, using the Mailing() and PersonalizedMailing() merge codes. 
  • Modified the Edit Mailing page to use Codemirror instead of CKEditor. 
  • Added the ability on the Preview Mailing screen to preview as an anonymous online user.
  • The various landing page modes -- Join, Profile, Unsubscribe, Unsubscribe All, and Resubscribe -- are now fully functional and featured.
  • Added the ability to view unpersonalized HTML template pages online using tags in the email template.

July 25 [Week 73]

  • Racked a new 48-port switch, racked the PRODDB1 & PRODDB2 servers, set up more 10GbE network links, ganged NIC adapters for failover, and reworked Veeam Backup & Restore replications.
  • Added the ability for the Merge Engine to process HTML fragments (instead of entire documents). 
  • Created the Unsubscribe and Confirmation landing pages, along with back-end functionality.

July 18 [Week 72]

  • Modified common profile fields (e.g. EmailAddress) to expose FieldType metadata in the same way as custom profile fields. 
  • Added support for drop-down and radio-button fields, along with client-side validation.
  • Added support for regex-based validation and replacement of common profile fields (so that they're consistent with custom fields), via custom entity attributes and database properties.

July 11 [Week 71]

  • Our Mail Transfer Servers are configured: Each MTA will be a pair of Dell PE R610's with dual six-core processors, 48GB RAM, an H700 controller, RAID-1 146GB for the OS, and an Intel DC S3610 480GB SSD for PMTA. The remaining three HDD slots are being used for backup file storage (Veeam replications) - they'll be Seagate SAS 10K 1.8TB drives. No need for RAID since they're managed by StarWind virtual SAN software. The servers are configured into a Windows 2012 R2 Cluster. This gives us failover/high availability for the PMTA engine. The SSD volume is managed under StarWind with a 10GbE Twinax connection and teamed NICs. If any single server fails the system will correct itself. If we need to take a system offline for maintenance she'll continue to send email. We're routing data across multiple subnets so that we can monitor QoS. We're very certain that we'll be able to sustain ~10 million sends per hour per MTA.
  • Peter is still refactoring pieces of the application.  He's been working on input masks for different field types (e.g. phone number USA, international phone numbers, state and country drop down boxes, date types, etc.). Each Konvey user will have lots of options to customize their data. 

July 4 [Week 70]

Our development resources are canoeing the backwoods of Maine this 4th of July week.  Everyone needs a break. No development this week.

June 27 [Week 69]

  • Converted from ASP.NET Core RC2 to RTM.
  • Converted static AutoMapper methods to instance-based ones.

June 20 [Week 68]

  • DNS Servers (NS1.KONVEY.NET, NS2 and NS3) put online.
  • Working on backup/replication process using Veeam.
  • App and PowerMTA Dell PowerEdge servers have arrived.  We're going to run these as a StarWind SAN.
  • New patch cables arrived - we're going to rework wiring to clean up 5 years of mess.
  • Additional functionality in the "join"-mode landing pages.

June 13 [Week 67]

We're still mucking around with hardware.  The SAN, file storage, and SQL servers are in good form.  We're turning our attention now to the App and PMTA servers (hoping to have these ready in about 2 weeks).  We did manage to get some programming done this week.  We'll have the platform running within two to three months.

  • Modified the batch files that install Dirigo.Mail.Service to account for RC2. Added support for specifying website bindings using the server.urls parameter (from config, command line, or environment variables).
  • Created batch files to launch the various projects using the "dotnet" command introduced in RC2.

June 6 [Week 66]

We've been preparing the hardware for our ESP service (as well as other SaaS offerings at Dirigo).  For the last several weeks this has felt like a cloud hardware project because we're in a hardware phase again.

  • New MS SQL 2016 clustered server is racked
  • Storage Area Network (SAN) is racked
  • 10GbE Network Switch is online
  • Two of the three new VMWare hosts got racked

May 23, 2016 [Week 64]

The unexpected: moving from .NET Core 1.0 RC1 to RC2 took about 4 days. We got most everything rejiggered and then on Tuesday this week Microsoft un-did many of the changes. OMG! Not good.  Rather than chasing changes we're back to fine tuning the application. Toward the end of next week we'll be racking the server hardware. From now until then we're loading data and testing the system. We'll be refining the system for a few months still.  Sending highly segmented and personalized email - which is what we're doing with Konvey - is rock solid. 

May 16 2016 [Week 63]

.NET Core 1.0 RC2 was released this week - a supported and production-ready cross-platform release. Moving to RC2 and then the final release in late June should not be that difficult (2-3 days of work). Our gamble to port the code 6 months ago is definitely a win. Had we continued development on .NET 4.5, a port to the new code would probably not be within our means at this stage.

On Wednesday this week Victoria Kuhn [Partner] got a 2.5 hour demo of the platform. That viewing was cut short because she had another meeting. The session was shot on video so that we can data mine the demo as we hone our strategy.

Peter is adding test data and templates so that he can expose software bugs. Monday and Tuesday were spent preparing the application for its first real showing.  More work is needed for the client landing pages and reporting.  Also, CKEditor is not interacting well with HTML e-mail templates.

April 25 to May 9 2016 [Week 60 - 62]

We’ve been sidelined for the last several weeks working on non-ESP tasks that impact standing up our new platform. Our project, which still needs further refinement, is ready to be installed in our data center.

Over the past few weeks we have removed 13 dedicated servers and virtualized those machines onto more powerful Dell PowerEdge’s. The plan is to have 6 new VM Hosts. We’ll eventually remove another 10 or so dedicated servers. The big issue is that we’re using too much power and cooling and have not been using our servers efficiently. We wanted to keep our same footprint, but, with a lot more computing power. We needed to free-up space to make room for the ESP project. We’re essentially replacing 6+ years of infrastructure. It’s a time consuming operation.

As a byproduct of virtualization we partnered with Veeam to become a cloud partner. This necessitated a complete rework of our backup and recovery operations.

The SAN is almost ready for racking. We’re still waiting for a few more hard disks and 128GB of RAM to arrive. The systems should be racked before month’s end. The SAN will take 4U of rack space and about 8 amps of power. As configured it will accommodate ~500GB of RAM and 64 hard disks (a blend of SSDs, 10K SAS, and 7.2K SATA – all enterprise class).

On 6/1/2016 the all new MS SQL Standard 2016 database software is being released. Using 2016 was part of our plan. Setting up and testing a clustered database server is time consuming.

Two new switches with 10Gbe are configured and waiting for installation. We needed the 10 gig network to support the Storage Area Network. The plan is to eventually upgrade all VM Hosts to 10 gigs. Until then we’ll deploy more network teaming.

Totally unrelated, Peter got tasked with testing a Dirigo Ski client who uses DirigoEdge with new load balancers, redundant web servers, and a new virtual directory for assets (images and documents) housed on the SAN. The test necessitates small changes to the DirigoEdge software to facilitate load balancing. These changes need to happen now because a major release of DirigoEdge is being tested. We’re rolling from v2.x to 3.x. Essentially, we’re moving our DirigoEdge ski client to a similar setup as the ESP in an effort to achieve zero downtime. This change requires modifications to our installation scripts and our entire software publishing model.

By hosting the ESP from our own facility we massively reduce our monthly operating costs. It has always been important to keep the 'burn rate' on the ESP project within our ability. We need to build the ESP 100% correctly and in line with our initial vision—without time and money mucking things up. Stepping up Dirigo's infrastructure was part of the plan. The minimum monthly cost to run the ESP in AWS is > $10K per month. The monthly cost to run a fully scaled platform in our host center is zero outside of some initial fixed cost. That's because we already have operations that cover the costs. Our only added cost to scale to 10 billion emails per month will be bandwidth. Okay, not totally true because at 10 billion we'll want more sophistication/safeguards. But you get the point...

And we're still working on client landing page features.

April 18 [Week 59]

This is a half week for the programming crew due to scheduled vacation time.

We continue work on the client landing page features. We want default opt-in and opt-out functions, the ability to add custom form fields that write to user-defined database fields, crazy personalization, user-defined design overrides, versioning (so that we can move from v1.0 to 2.0... without the need to change existing client setups), and a bunch of other features. The "under the hood" plumbing here is time consuming because the application roadmap needs to be very well planned and executed.

10GbE network adapters, SFPs, and cables are inbound from vendors so that we can test a new pair of Dell servers. We've yet to test the virtual Storage Area Network (SAN). Last week we were accepted into the Veeam cloud storage partner program. This gives us a new set of virtualization tools. The marketing website is now about half finished.

April 11 2016 [Week 58]

This week we're working almost exclusively on the opt-in and unsubscribe functions. Clients will have the option of using Konvey-hosted email preference pages for managing contact opt-in and opt-out. The URL structure of these pages is critical because once launched the structure will be difficult to change. The parameters contained in the URL are responsible for routing and loading dynamic data. Security is a huge concern because we don't want bots scraping content, so we've baked in a hash, salted encryption, cross-server encryption keys, server farms, etc. The dynamic preference pages need to be flexible, which means getting the underlying data model right:

  • Contacts are the recipients of the emails. Each contact must have an email address that's unique within a site.
  • Profile fields are the categories of information about your contacts. All sites contain a common set of standard profile fields—such as EmailAddress, FirstName, and LastName—but with Konvey you can create as many custom profile fields as you'd like.
  • Lists are named categories to which contacts can subscribe, and from which they can unsubscribe. When you create a new list (e.g. "Newsletters" or "Special Offers"), you're simply creating a new category: you're not creating a new copy of your contact data.
  • Sites: each site contains its own database of contacts, its own custom profile fields, its own settings, its own mailings, etc. It's not likely a client will need more than one site, since Konvey supports flexible branding and segmentation within a single site.
  • Related tables contain data that are proprietary to a particular business but that can't be stored in profile fields. Transactional data such as purchase history, for example, can't be stored in custom profile fields because each contact is likely to have more than one corresponding purchase. Related tables can be used for filtering and for content personalization.

Like other pieces of the Konvey application, this body of work is not as easy as one might think.
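
This is not Konvey's exact scheme, but here is a minimal sketch of the general idea behind the hashed URL parameters: the values that route and load dynamic data travel with a keyed hash, so bots can't enumerate or tamper with preference-page links. The class name and token format are illustrative:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PreferenceLinkSigner
{
    // Produces and verifies a tamper-evident token for landing-page URLs, e.g.
    //   /profile?site=42&contact=1001&sig=Jm3rX...
    public static string Sign(string payload, byte[] secretKey)
    {
        using (var hmac = new HMACSHA256(secretKey))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            // URL-safe Base64 so the token can live in a query string.
            return Convert.ToBase64String(hash).TrimEnd('=').Replace('+', '-').Replace('/', '_');
        }
    }

    public static bool Verify(string payload, string signature, byte[] secretKey)
    {
        // Recompute and compare; a production version would use a constant-time comparison.
        return Sign(payload, secretKey) == signature;
    }
}

// var sig = PreferenceLinkSigner.Sign("site=42&contact=1001", key);
```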

April 4 2016 [Week 57]

  • Begin work on landing pages: the email recipient facing portion of the application (e.g. join and unsubscribe pages).  
  • Converted the Web application to a Single Page Application (SPA). SPAs are Web apps that load a single HTML page and dynamically update that page as the user interacts with the app. SPAs use AJAX and HTML5 to create fluid and responsive Web apps, without constant page reloads. Much of the work was already happening on the client side in JavaScript.
  • Baked-in a server farm layer so that we can scale-out horizontally. 

March 28 2016 [Week 56]

We're still testing and refining the applications. Global and mailing level snippets were added to the user interface. There are far too many small and medium changes to enumerate. We've been in the email marketing business for 18+ years and we've never seen anything as sophisticated as what we built into Konvey.

For those that have access, the system is running inside of our Corporate network at  http://peter-1.corp.dirigodev.com/  or IP ending octet 111. This is not a public address. To create an account you need a registration code - similar to DirigoEdge. We'll be moving the code to a staging server once we're a bit more buttoned up. Our deployment scripts were obliterated with the port to .NET Core 1.0 and this is not a simple setup.  We'll need to create new deployment code.

March 7 2016 [Week 54]

We're starting the first end-to-end test of the application. This is a test, fix, and refine exercise. We have not polished the UX or design yet. Things are very much Bootstrap + Angular looking. We've got lots more work to do creating an API, template editor, enhanced reporting, etc. We'll begin sending email before the end of the year.

February 29, 2016 [Week 51 to 53]

  • The UI for creating recurring and non-recurring schedules is now complete.
  • Validation logic—both client- and server-side—has been overhauled.
  • Date/time values are now fully internationalized, with automatic detection of browser settings.
  • A new AngularJS-friendly date/time picker component has been integrated on many different screens.
  • A new Broadcast screen has been created (Broadcast = “Blast” or “Send”); it can display all broadcasts for the site, or just those for a specific mailing or schedule.

February 8, 2016 [Week 50]

We've buttoned up most of the scheduling interface. Expresso 3.0 (a regular expression development tool) was a lifesaver. The piece that we're finishing this week is the UTC offset for time zones. This work was not 'a box of chocolates.' We know that keeping schedules expressed as a cron expression will circumvent pitfalls experienced with past email marketing platforms. The little details matter here because not accounting for leap seconds, leap years, and daylight saving time can be a source of duplicate mailings. Like other areas of our ESP application this was difficult functionality to build.

February 1, 2016 [Week 49]

Most of the week was devoted to the front-end interface for scheduling. The front end will make a Cron Job from the recurrence patterns below and write it to a SQL table. Every 60 seconds a scheduled task will evaluate thousands of Cron Jobs and, when appropriate, set a next distribution event in SQL. The distribution table scheduled task will then move jobs into a queuing engine. The code is designed to handle more than 100K schedules. From save to sending e-mail will take roughly 2 minutes.

Recurrence patterns are being designed for:

  • Now (no recurrence)
  • A given date/time/timezone (no recurrence)
  • Every Hour
  • Every x Hours 
  • Daily
  • Every x Days
  • Weekly Recurring Every x Weeks on: One or More Checked (Sun, Mon, Tue, Wed, Thurs, Fri, Sat)
  • Monthly Day x of every x Months
  • On the (first, second, third, fourth, last) (day, weekday, weekend day, Mon, Tue, Wed, Thur, Fri, Sat) of every x months
  • Yearly
  • Yearly recurring every x Years on (Jan, Feb, Mar, Apr, May, Jun, July, Aug, Sep, Oct, Nov, Dec) x Day
  • Yearly on the (first, second, third, fourth, last) (Sun, Mon, Tue, Wed, Thurs, Fri, Sat) of (Jan, Feb, Mar, Apr, May, Jun, July, Aug, Sep, Oct, Nov, Dec)
  • Start Day No End Date
  • Start Day End after x Occurrences
  • Start Day End by Month/Day/Year

Prior to writing our own code we evaluated the Quartz Enterprise Job Scheduler (Terracotta, Inc.), but in the end it did not conform to our requirements. We're using a little piece of Quartz in our application.
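
A minimal sketch of that evaluation step, assuming Quartz's CronExpression is the piece in use; the cron string, class, and method names are illustrative:

```csharp
using System;
using Quartz;

public static class ScheduleEvaluator
{
    // Runs every 60 seconds: for each stored cron string, compute the next fire
    // time and, if it falls inside the current window, hand the schedule to the
    // distribution queue.
    public static DateTimeOffset? GetNextFireTimeUtc(string cron, TimeZoneInfo siteTimeZone, DateTimeOffset afterUtc)
    {
        var expression = new CronExpression(cron)
        {
            // Evaluate the pattern in the site's own time zone so "9:00 AM"
            // means 9:00 AM local to the client, not local to the server.
            TimeZone = siteTimeZone
        };
        return expression.GetNextValidTimeAfter(afterUtc);
    }
}

// Example: the next 9:00 AM Eastern after "now".
// var next = ScheduleEvaluator.GetNextFireTimeUtc(
//     "0 0 9 * * ?",
//     TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"),
//     DateTimeOffset.UtcNow);
```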

We finished our ARIN (American Registry for Internet Numbers) hostmaster setup and began pre-approval request for the acquisition of a /23 (512 IPv4 addresses).  IPs will be acquired in an auction from Hilco Streambank at a cost of ~ $6,500.

January 25, 2016 [Week 48]

  • Continuation of week 47 tasks.

January 18, 2016 [Week 47]

  • This week we'll be working on the interfaces required to construct an e-mail message - from, subject, HTML blocks, test sending, scheduling.  You got it - it took 46 weeks before we got to the simple obvious stuff.
  • Twitter & Facebook accounts created.  Waiting on the domain name which is in escrow.

January 11, 2016 [Week 46]

  • Work continues on the interfaces required to construct an e-mail distribution - from, subject, HTML blocks, test sending, scheduling... We explored integrating the scheduling feature with SQL Server Agent but found limitations with recovering jobs under certain use cases. Your typical ESP has send 'Now' or 'Schedule for Date / Time / Time Zone' scheduling features. Scheduling 'tomorrow' or 'next business day' is also well within reach for most ESPs. Now consider that the USA has 6 time zones. Let's take a use case that has a campaign alerting ski lift ticket prospects about a midnight sale that begins at noon PST. That's 3 pm EST. Handling time zones correctly is crucial for international marketers, where "Noon Today" in California is 6 am tomorrow in Sydney. Scheduling is far more difficult than you might think. (A sketch of the time zone conversion follows this list.)
  • On the 14th several of the Partners met.  The outcome of that meeting: keep it innovative, serve markets that cannot be served by others, aim the service at marketing experts, don't invest heavily in a WYSIWYG templating system just yet, a robust API is necessary from the outset, begin to integrate with DirigoEdge Ski ASAP. 
  • As of January 14th the port to .NET 5 is complete. We're moving forward again.
  • This week we attempted to push the application to Entity Framework 7. We cannot use EF7 for the entire application until subquery and GROUP BY translation to SQL are complete. These are critical O/RM features that are on the EF7 backlog. EF7 is required for the .NET 5 Identity Framework - so we're using both EF6.x and EF7.
  • We still need to fix a few Razor views. That should be the last of the known .NET 5 conversion issues. This week we should start moving ahead again.  
  • Ryan finished some of the new branding. The new website will be named www.Konvey.com. Convey is a verb that means to transport or carry to a place; to carry someone or something from one place to another; to make something known to someone. We're using the letter K because it makes the mark unique. It is very difficult these days to find a short top-level domain (TLD) that is clean on the search engines and easy for customers to remember and spell. Konvey is our ticket.
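
To make the time zone point above concrete, here is a minimal sketch of converting a marketer's wall-clock send time to UTC — the class name is illustrative and this is only the conversion step, not Konvey's full scheduling logic:

```csharp
using System;

public static class SendTimeCalculator
{
    // Converts a marketer-specified wall-clock send time ("noon Pacific") to UTC,
    // so a single queue of UTC timestamps drives the mailing engine regardless of
    // where the recipient or the server sits.
    public static DateTime ToUtc(DateTime localWallClock, string windowsTimeZoneId)
    {
        var zone = TimeZoneInfo.FindSystemTimeZoneById(windowsTimeZoneId);
        return TimeZoneInfo.ConvertTimeToUtc(
            DateTime.SpecifyKind(localWallClock, DateTimeKind.Unspecified), zone);
    }
}

// Noon Pacific on January 18, 2016...
// var utc = SendTimeCalculator.ToUtc(new DateTime(2016, 1, 18, 12, 0, 0), "Pacific Standard Time");
// ...is 20:00 UTC, i.e. 3:00 PM Eastern.
```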

January 4, 2016 [Week 45]

Happy New Year! Marketing website design and business strategy work is now underway. Ryan Dolan has joined the project. Development will remain focused on the port to .NET 5 RC1. Making our ESP compatible with .NET Core means that major parts of unfinished code needed to be rewritten at a cost of ~120 hours. We made a strategic decision that an upgrade now would be less costly than next year or even 2-4 years down the road. Once we put customers on the platform the cost to upgrade will be 10-20x the current cost. This is a very complicated set of applications and our first conversion to .NET 5. If you set the .json-based project configuration stuff aside, this jump had a shorter learning curve than the one we made back in '04 from Web Forms -> MVC.

December 28, 2015 [Week 44]

This was a short week for our ESP development crew due to accrued vacation time. All the work this week was .NET 5 related. 

December 21, 2015 [Week 43]

  • We'll be working on the ASP.NET 5 port for the remainder of this Christmas week.

December 14, 2015 [Week 42]

  • Data export is fully functioning.
  • We mastered AngularJS drag-and-drop.
  • Taking a second run at porting the application to .NET 5, which is in Release Candidate stage.

December 7, 2015 [Week 41]

  • Another week working on data export and a bunch of other complicated UI functions.

November 30, 2015 [Week 40]

  • Begin work on data export.
  • Finish list opt-in

November 23, 2015 [Week 39]

  • More of the same. Another week fleshing out the importer.

November 16, 2015 [Week 38]

  • By the numbers: Over the past seven days we made 5 commits to all branches.  A commit or "revision" is an individual change to a file or set of files.  It's like when you save a file.  Every time you save with Git you get a unique ID that allows you to keep a record of what changes were made.  131 files changed and there were 1,623 additions and 784 deletes.  During the past 30 days we've had 15,768++ and 7,098--.  For the life of the Github repo we've added 541,567 lines of code and removed 197,698 lines of code.  The numbers highlight a ton of refactoring as we march forward.  This is not an exercise of getting to a big number of source lines of code (SLOC). This is just a benchmark at Week 38 - these sorts of metrics are meaningless for the most part. 
  • On Monday we worked out Filter List and Pagination client-side code for various screens.
  • On Thursday we got our first look at the data import functionality. What an elegant solution put forth by Peter and his Dirigo colleague Christopher Belanger. They stream data into a SQL table. Then they allow the user to configure the data into their dataset(s). If the header row is ‘known’ or has been used historically, then they default the configuration such that the user just needs to confirm the mappings. Prior imports are held for reuse. Very slick, because we’re not writing files to the file system. That would be an issue with a fault-tolerant, highly available setup (e.g. because we would need to write the CSV file to some logical drive array). The import is multi-threaded and the workload is not done by the web servers – it’s handed off to however many application servers are broadcasting that they’re ready and able to process import data. (A sketch of the streaming approach follows.)
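
A simplified sketch of the streaming idea — read the upload as a stream and bulk copy batches straight into a SQL staging table, never touching the file system. Table and class names are illustrative, and the real importer is multi-threaded and far more involved:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

public static class ImportStager
{
    // Streams an uploaded CSV straight into a SQL staging table in batches,
    // so nothing is ever written to the web server's file system.
    public static void StageCsv(Stream upload, string connectionString, int batchSize = 5000)
    {
        var buffer = new DataTable();
        buffer.Columns.Add("LineNumber", typeof(int));
        buffer.Columns.Add("RawLine", typeof(string));

        using (var reader = new StreamReader(upload))
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            string line;
            var lineNumber = 0;
            while ((line = reader.ReadLine()) != null)
            {
                buffer.Rows.Add(++lineNumber, line);
                if (buffer.Rows.Count >= batchSize)
                    Flush(connection, buffer);
            }
            Flush(connection, buffer);   // remaining rows
        }
    }

    private static void Flush(SqlConnection connection, DataTable buffer)
    {
        if (buffer.Rows.Count == 0) return;
        using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.ImportStaging" })
        {
            bulk.WriteToServer(buffer);
        }
        buffer.Clear();
    }
}
```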

November 9, 2015 [Week 37]

  • Create/Edit Users, Campaigns, Lists, and Filters is mostly finished. The user interface will continue to undergo waves of changes for the foreseeable future. Fleshing out a polished interface is an iterative process.
  • Finished the client- and server-side validation methodology.
  • Worked on the Angular UI Bootstrap alert box that will signal saved content.
  • A new 10GbE network switch landed.

November 2, 2015 [Week 36]

  • This week we're going to fully flesh out some front-end admin screens - the sort that add, edit, and delete information in the database. The task at hand is to fully nail down server-side and client-side validation and to work through how AngularJS is going to pass JSON objects to .NET controllers - and then back to AngularJS if validation fails. This is difficult UX stuff because we're using tabbed input areas and a validation error will need to return a user to the correct tab inside the interface. Once we have our full model worked out, building other screens should become, for the most part, routine.
  • We thought through how we're going to handle NULL values for boolean data types. MS SQL allows for the use of a three-valued logic because of its special treatment of nulls.  We're going to preserve nullables.
  • We've come full circle on VMWare Enterprise virtual SANs and we're looking at StarWind Software again.  It boils down to money - how much do we want to spend over the life of this project and others.
  • Our existing Port25 PowerMTA server was migrated to ESXi and new hardware.
  • Here's a screenshot from the filtering/segmentation screen.

    [Screenshot: ESP list segmentation interface]

October 26, 2015 [Week 35]

  • On Wednesday this week at 5:30 am we deployed the new firewalls into production at the host center.
  • All internal DirigoHost servers began to relay email over the ESP.
  • VMWare Host 6 was racked and VMWare Host 7 is being prepared.
  • We've moved back to coding the front-end of the email application - a full week early.  Sweet!
  • On Halloween (Saturday) we ran into issues with how the new firewalls are handing out IPs via DHCP.  We were forced to widen the subnet mask for our network.  Over the next several weeks we'll be updating all of our servers.  This task came earlier than expected. 

October 12, 2015 [Week 32]

We'll be deploying hardware and software for the next 60-100 days.  By November 10th we'd like to be back to coding most of the day.  The front-end of our application is not yet finished. We're working hard to prepare our production environment.  One of our goals here is to get more efficient and dense within a 42U rack.  We need to use space and power wisely and to plan for growth.

  • We're working on the 10GbE switches and VLAN setup this week - we need the 10 gig for the virtual SAN
  • As soon as the server rails arrive we'll rack VMWare Host 6
  • We performed migration of a live Windows 2012 server into VShere and worked out some kinks
  • The new firewalls are being readied to replace the Cisco ASA's

October 5, 2015 [Week 31]

As expected we're still deploying new hardware/software.  Here's a high-level of what we worked on this week:

  • Deployed VMWare Host 5
  • VMWare Host 6 is being modified for deployment in < 2 weeks
  • Added three new domain controllers (one at the office and one at the host center) 
  • Deployed VCenter Essentials and Veeam Software to the host center
  • Added the Dirigo 9 and StJoe's CM web servers (not 100% part of the ESP project - but, being used as test beds so that we're comfortable with the new setup well before going live)
  • Finished configuring QA Web, QA APP, and QA DB web servers at the office
  • Reworked Controller 1 security (this is a domain controller)
  • Continued with configuration of Firewall/IDS 1 and Firewall/IDS 2. We're going to need to rework our VLANs and the subnet mask. Continued negotiation to purchase two or more clean /24 subnets, which will be hung with ARIN and routed from our upstream provider (a very sad state - /24's run about $3,000 each in the IPv4 market space right now). It is difficult for micro customers to get allocations these days. Buying IP addresses is not uncommon. 
  • Began using a new span of the PowerVault MD1000 DAS at the host center for VM backup
  • Continued the hardware build-out and planning of VMWare Host 1, 2 and 3.  These are the PowerEdge R730xd's
  • Added 10 Mbps of bandwidth to the host center and met with various parties to ensure that we’ll have appropriate bandwidth, power, cooling, and rack space at an affordable cost
  • We finalized the purchase of the three new firewalls - the manufacturer, in the end, gave us significant concessions because our use of the firewalls fell outside of their pricing model (we're not that big of a company but we need monster firewalls with the ability to start small and scale). After selecting and testing the firewalls it took a full month and many interactions to get to an acceptable three-year cost of ownership. Normal market selling price on the IDS/firewalls is roughly $34,000. The monthly subscription for the webserver and network protection security modules costs around $250. The manufacturer is a publicly traded firm and approvals were required above the New England sales territory manager. We're pleased with the product and thankful that the manufacturer is allowing us to use their product. We'll be producing a review of the product and possibly a case study.
  • Peter also managed to do a wee bit of coding

September 28, 2015 [Week 30]

  • A fair chunk of September was dedicated to hardware concerns and that trend will continue for the next several months. 
  • We've replaced our office firewall and are preparing to replace the host center firewall. The new units will support >15Gb/s of transport and millions of active connections. Active connections are the issue. Our multithreaded application opens thousands of connections. The objective is to send e-mail fast. Click redirection, opens, and image serving open a huge number of concurrent connections when sending email at rates exceeding 10 million per hour. Many tasks and services don't immediately release connections, so you need 5-10x what you might expect. Even mid-sized enterprise firewalls cannot handle the throughput. The ones that can handle the traffic could easily set you back north of $30K for a pair. The trick is to find a robust enterprise firewall at the right price point. Normal applications like web hosting don't use that many connections. When we're not sending high volumes of e-mail we tend to sit around 1K active connections for every 15,000 Kbps of traffic. 
  • We’re planning a network switch upgrade to 10Gb/s so that we can support the storage area network (SAN).
  • A new HyperV virtualized server was added to the office to serve as a development server.
  • A new domain controller was added to the office so that we have two and the original domain controller was refactored a bit.
  • We went around the barn a bunch of times on software architecture. Should we go with VMWare or Hyper-V? Do we use a SAN or a virtual SAN? The cost differences are not trivial and each decision has pretty serious downstream repercussions. We're fully informed and are still weighing our options and modeling cost differences. Our current web hosting business complicates the selection. We want to use the same technology for hosting as we do for the ESP. Since the ESP project is six years newer than the original hosting platform, we’re using this project as a catalyst to plan a new highly available (HA) hosting environment for our enterprise resort customers. 
  • Of course, this project would be much easier if we could just stick everything in the cloud or hire a group like Zayo to host our application. But a move like that would undermine our desire to use commodity hardware and to keep our costs low.

August 31, 2015 [Week 26]

  • Here we are with our kids back in school and the summer over.  Peter's son has gone away to college and David's twins have entered middle school.  How time moves when you're building an e-mail application.  We're busy building out the front-end.  There are many screens to engineer.
  • Created new infrastructure for performing server-side validation.
  • Created an AngularJS template encapsulating the functionality of a single form field of any type, to reduce boilerplate HTML.
  • The first two Dell PowerEdge R730xd Servers have been ordered.  Once here we'll need to outfit with more RAM and enterprise SSD drives.  

August 24, 2015 [Week 25]

  • Implemented the UI for managing and editing Filters.
  • Modified the ContactLoader class so that the retrieval of contacts supports pagination and also supports optional sorting by EmailAddress instead of ContactId.
  • Implemented the UI for finding Contacts via filtering, and displaying them in a server-paginated grid.
  • Implemented the first iteration of the UI for adding/editing Contacts.

August 17, 2015 [Week 24]

  • Upgraded all projects to .NET 4.6.
  • Added insert and delete functionality to the Site Field UI pages.
  • Moved most of the logic of the Profile Fields grid and form pages into “base” controllers, models, and views that can be re-used by similar grid/form pages.
  • Refactored the base controllers to take advantage of generics, as provided by TypeScript.
  • Implemented the UI (and some back-end logic) for managing and editing Campaigns.

August 10, 2015 [Week 23]

  • Created grid and form pages that will serve as the template for all similar pages in the future.
  • Refactored the new grid and form pages—as well as all existing UI pages—so that they’re pure AngularJS applications, rather than Angular/Razor hybrids.

August 3, 2015 [Week 22]

  • This week we decided to port the project to Visual Studio 2015 with ASP.NET 5 which is currently available as a Preview Release. The Release Candidate (first production ready test code) is scheduled for late fall 2015. We're betting that .NET 5 will be released in Q1 2016. If we move to new technology too early it can create unwanted or unneeded cost. If we move too late we've developed something new on a legacy platform. .NET is entering a new era as it embraces Open Source as a core principle. Microsoft is making major new investments—this will be a major update/release. There are risks with moving onto new technology this early, but, we feel that the benefits outweigh the risks.  With client work (your job depends on it) we'd never-ever-ever move this early. In a nutshell, the update to .NET 5 was a complete failure and we abandoned the move after much learning.  We'll attempt the move again in a few months.
  • Completed the implementation of Related Table Data filtering.
  • Finished  creating the “skeleton” of the administrator-facing user interface, including authentication, authorization, navigation, site selection, MVC routing, etc.
  • Switched from LESS to SASS, as the latter seems to have more industry traction at the moment.

July 27, 2015 [Week 21]

  • Very little was accomplished this week due to vacation schedules.

July 20, 2015 [Week 20]

  • Added the ability to filter by URLs and URL Titles—in addition to specific hyperlinks, campaigns, etc.—in Click and Open filtering.
  • Implemented AJAX auto-complete functionality for URLs and URL Titles in Click and Open filtering.
  • Enhanced the logic of the Schedule Update Engine.
  • Refactored the codebase to use far fewer MVC and Web API routing rules (for performance reasons).
  • Modified the logic of “sliding date”-type filtering to automatically use a site-specific time zone component, so that filter expressions such as “Opened any mailing yesterday”, for example, define “yesterday” according to site-specific settings.
  • Began to add support for filtering on Related Table Data (similar to profile field data, but stored in separate database tables).

July 13, 2015 [Week 19]

  • Development team on vacation.

July 6, 2015 [Week 18]

  • We've mostly moved away from date/time functions. This week we're refining event-based criteria (e.g. opened, clicked, not-clicked, bounced). Selections such as send to anyone who has 'clicked on a specific link or links within a mailing', 'clicked on a specific link name within a type of mailing', 'clicked on any links within a group of mailings', 'anyone who has opened an email or a group of emails', etc. The AngularJS interface for filtering is quite complex. Added classes for response data filtering (i.e., filtering based on bounces, clicks, opens, and sent messages).
  • Created an HTTP Module that automatically adds an assembly version number to the URLs of static content such as JavaScript and CSS files. This prevents browsers from caching old versions of static content when a new build of the application is deployed. (A sketch of the idea follows this list.)
  • Implemented the user interface for the various types of response data filtering.
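
The cache-busting idea, reduced to its core — the real implementation is an HTTP Module that rewrites URLs in the response, but the essence is tagging each static URL with the build's assembly version. Names here are illustrative:

```csharp
using System.Reflection;

public static class StaticContentVersion
{
    // The assembly version changes with each build (assuming the build stamps a
    // new AssemblyVersion), so URLs like "/js/app.js?v=1.0.5901.123" force
    // browsers to fetch fresh copies after a deploy.
    private static readonly string Version =
        Assembly.GetExecutingAssembly().GetName().Version.ToString();

    public static string Versioned(string url)
    {
        var separator = url.Contains("?") ? "&" : "?";
        return url + separator + "v=" + Version;
    }
}

// <script src="@StaticContentVersion.Versioned("/js/app.js")"></script>
```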

June 29, 2015 [Week 17]

  • We're still working on triggers and the class structure to support a few different date/time functions such as 'send x days before birthday', 'send to individuals over x years old', 'send to individuals between x and y years old', 'send to individuals who opted in prior to dd/mm/yyyy', 'send to individuals who opted in between dd/mm/yyyy and dd/mm/yyyy', 'send to anyone with a birthday next month', 'send to anyone who clicked on link id x from email distribution x in the last x days', 'send to an individual who opened but did not click a link in a group of email distributions between dd/mm/yyyy and dd/mm/yyyy'. There are more variations, but you get the picture. Here we are in 2015 and there are still lots of little issues with date/time - e.g. leap years and the formatting and storage of dates. The need for this logic has slowed segmentation based on triggers a bit. We're refactoring a bit and need to be careful NOT to take shortcuts that could impact development 5 years from now in a bad way. Making sound/tested/vetted code and architecture decisions is super important at this stage.
  • Enhanced the Tracking Engine to support the capture of IP address and user agent string for both clicks and opens. We will be truncating user agents at a set character length to reduce table size; we analyzed about a million records of historical user agent strings to validate the approach.
  • Enhanced filtering so that profile field expressions now support sliding date ranges (e.g., “Birthdate is 7 days from now—not including today—ignoring the year” or “CreateDate is within the past 2 months”). A minimal sketch of the birthday logic follows this list.
  • We've settled on four virtualized Dell PowerEdge R730xd servers, two PowerEdge R6x0 servers, and some direct-attached storage. The R730xds are being vetted for RAID speed and SSD support; each can hold up to 26 hard disks.
  • HAProxy load balancers have been configured and are running in the development environment. The load balancers are key to a highly available service where we can take hardware offline without impacting the service. We needed to deploy them now because our application code is being written to take bits of information from the hardware, e.g. which SMTP server is active and available for relay. We also need to bake in the ability for some distributions to have higher priority than others. [This project is the test bed for highly available, load-balanced hosting for our Ski and Resort clients.]
  • There is still no user interface for our system - we're about 1/3 of the way to the Alpha stage.
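
As a concrete example of the date logic above, here is a minimal sketch of the "birthday is x days from now, ignoring the year" check, including the leap-year wrinkle mentioned earlier. The class and method names are illustrative only.

```csharp
using System;

// Illustrative sketch: does the contact's birthdate fall exactly N days from
// today, regardless of birth year? Feb 29 birthdays match Feb 28 in non-leap years.
static class BirthdaySketch
{
    public static bool IsBirthdayInDays(DateTime birthdate, int days, DateTime today)
    {
        var target = today.Date.AddDays(days);

        int day = birthdate.Day;
        if (birthdate.Month == 2 && day == 29 && !DateTime.IsLeapYear(target.Year))
            day = 28;

        return target.Month == birthdate.Month && target.Day == day;
    }
}
```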

June 22, 2015 [Week 16]

  • Up this week is logic that allows segmentation [and triggers] based on previous click, open and send data. We expect this coding to take about two weeks.
  • We still have external/custom dataset work to do.
  • Added many levels of exception handling to the new Tracking Engine (for handling clicks and opens).
  • Added new exception handling to the Merge Engine to handle errors in server-side Javascript. Previously, some exceptions would return no error message at all, and none would display the exact source code that caused the exception. Now, all Javascript exceptions will return relevant error messages, including source code.
  • Refactored the Merge Engine to support additional data—for example, Site settings—that can be used for conditional logic, or for merging into visible content.
  • Created a new “stylesheet inliner” that parses a CSS stylesheet and converts the stylesheet-declared styles into inline styles (i.e., styles attached directly to HTML elements). This is important for email because some email clients don’t properly support stylesheets but do support inline styles (a minimal sketch follows this list).
  • We'll likely move to front-end UX development in Week 18.
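
Below is a minimal, regex-based sketch of the stylesheet-inliner idea for simple "tag { declarations }" rules. A production inliner also has to handle class/ID/descendant selectors, specificity, merging with existing inline styles, and media queries; none of that is shown here.

```csharp
using System.Text.RegularExpressions;

// Illustrative sketch only: copy simple "tag { declarations }" rules onto
// matching HTML elements as inline style attributes.
public static class SimpleStyleInliner
{
    public static string Inline(string html, string stylesheet)
    {
        foreach (Match rule in Regex.Matches(stylesheet, @"([^{}]+)\{([^{}]*)\}"))
        {
            string tag = rule.Groups[1].Value.Trim();    // e.g. "td"
            string style = rule.Groups[2].Value.Trim();  // e.g. "color:#333; padding:0;"

            html = Regex.Replace(html, "<" + Regex.Escape(tag) + @"(\s[^>]*)?>",
                m => m.Value.Contains("style=")
                    ? m.Value // already styled inline; a real inliner would merge declarations
                    : m.Value.Insert(m.Value.Length - 1, " style=\"" + style + "\""));
        }
        return html;
    }
}
```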

June 15, 2015 [Week 15]

  • Created a new type of filtering based on current subscription status and/or specific subscribe/unsubscribe events.
  • Implemented the front-end code for processing clicks and opens and inserting the appropriate data into MSMQ queues. MSMQ (Microsoft Message Queuing) is messaging infrastructure that lets applications running in separate processes or on separate servers communicate in a failsafe manner: a queue is a temporary storage location from which messages can be sent and received reliably, as and when conditions permit (a minimal enqueue sketch follows this list).
  • Enhanced the merge engine to automatically discover and track hyperlinks within HTML mailing content.
  • Enhanced the merge engine to automatically insert invisible tracking GIFs into HTML mailing content.
  • Implemented the Tracking Engine for moving click and opens from MSMQ queues into the database; insertions are done in batches, wrapped in explicit transactions, for performance reasons.
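
The sketch below shows the enqueue side of the click/open pipeline using System.Messaging against a transactional MSMQ queue. The queue path and the TrackingEvent DTO are assumptions for illustration, not Konvey's actual names.

```csharp
using System.Messaging; // add a reference to System.Messaging.dll

// Illustrative DTO; the real event payload and queue path are Konvey-specific.
public class TrackingEvent
{
    public int ContactId { get; set; }
    public int MailingId { get; set; }
    public string EventType { get; set; } // "open" or "click"
}

public static class TrackingQueue
{
    private const string QueuePath = @".\private$\konvey-tracking"; // hypothetical path

    public static void Enqueue(TrackingEvent evt)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, true); // transactional queue

        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(evt, tx); // serialized with the default XmlMessageFormatter
            tx.Commit();
        }
    }
}
```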

June 8, 2015 [Week 14]

We begin to deep-dive into distributions, templates, modules, CMS integration, marketing automation, reporting…

  • Implemented a FileSystemWatcher component to automatically reload application settings whenever the master JSON configuration file is modified (previously, a timer event was used); a minimal sketch follows this list.
  • Finished implementation of Dataflow components in the Mailing Engine.
  • Modified the Merge Engine to support Text content as well as HTML, so that merging is now supported in the Subject, From Name, From Address, and Reply Address settings of a mailing.
  • Modified the Merge Engine to parse and interpret multiple content sources at a time, so that merging of the HTML content, Text content, Subject, From Name, From Address, and Reply Address can happen with a single Javascript method call (for sake of efficiency).
  • Implemented the concept of “Snippets”, which are reusable chunks of HTML and/or Text content that can be used anywhere the Merge Engine is supported. Snippets can also contain references to other snippets, to support nesting scenarios.
  • Created the Schedule Update Engine, which updates the NextBroadcast date on a Schedule when the Schedule is inserted or updated. The updated date is based on the Schedule’s CRON expression and also the number of Broadcasts for that Schedule (relative to the configured maximum values).
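
A minimal sketch of the FileSystemWatcher-based settings reload mentioned above; the directory, file name, and reload behavior are placeholders rather than the actual configuration framework.

```csharp
using System;
using System.IO;

// Illustrative sketch: reload settings when the master JSON config file changes.
public class SettingsReloader
{
    private readonly FileSystemWatcher _watcher;

    public SettingsReloader(string configDirectory, string fileName)
    {
        _watcher = new FileSystemWatcher(configDirectory, fileName)
        {
            NotifyFilter = NotifyFilters.LastWrite
        };
        _watcher.Changed += (sender, e) => ReloadSettings(e.FullPath);
        _watcher.EnableRaisingEvents = true;
    }

    private static void ReloadSettings(string path)
    {
        // In the real application the JSON file is deserialized into
        // strongly-typed settings classes; here we just log the change.
        Console.WriteLine("Configuration file changed: " + path + "; reloading settings.");
    }
}
```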

June 1, 2015 [Week 13]:

This week we have an application that can build an HTML template, perform complex merges, assemble an email from a segmented marketing database and pass it to the mail transfer agent at >10 million email sends per hour per app server. Small celebration - yippee!  The application lacks a front-end and is nowhere close to completion.  Nonetheless, this is a significant milestone on a road full of milestones.  Onward...   

We performed several days of performance benchmarking tests - we'd like a single app server to process > 10 million sends per hour.  We're having minor problems benchmarking in a non-production environment.  Lots of server cores and multiple SSD fast RAID volumes have a huge impact on performance. Both the hardware and the software need to be finely tuned for our ESP to be commercially viable.  We did calculations on storing 15 billion records to ensure that we can get to the desired scale on budget.  The database architecture is being heavily scrutinized to minimize storage requirements.

  • We've decided to upgrade Peter's workstation to dual hexa-core Intel Xeon X5670 processors (12 cores total) and to add another SSD volume. This configuration runs at parity with a ~$2,250 production server.

  • Moved lots of SQL into transactions to get improved performance.
  • Integrated Apple Watch markup as a fourth template type (i.e. HTML, text, www, Apple Watch). We're not sure this template type will stay long-term because it has not undergone RFC scrutiny; nonetheless, we cooked it into the feature set.
  • Finished implementation of the Merge Engine infrastructure, including all AngularJS-style directives.
  • Enhanced the merge engine to support auto-detection of references to related data tables (which allows the appropriate SQL subqueries to be constructed when querying Contact data).
  • Created a multi-threaded SMTP “pool” that handles SMTP delivery to the local MTA and that handles the opening and closing of SMTP connections. The pool is implemented as a static object, so all client code running in the same application domain shares the same pool of SMTP connections. The Mailing Engine handles its own SMTP connections, though, so the SMTP pool will be used primarily for “single-sends” of individual messages (to prevent multiple successive single-sends from each requiring their own SMTP connection).
  • Prototyped various multi-threading constructs for the Mailing Engine—custom thread pool, Parallel.ForEach method, Parallel LINQ query, etc.—before deciding to use the Dataflow components of the Task Parallel Library.
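
To show the shape of the Dataflow approach we settled on, here is a minimal, self-contained sketch: one block stands in for the Merge Engine, a linked block stands in for SMTP delivery, and completion propagates down the pipeline. All names and degree-of-parallelism values are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks.Dataflow; // TPL Dataflow NuGet package

// Illustrative pipeline: merge content per contact, then "deliver" the result.
class DataflowSketch
{
    static void Main()
    {
        var merge = new TransformBlock<string, string>(
            contact => "Hello " + contact + "!", // stand-in for the Merge Engine
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });

        var deliver = new ActionBlock<string>(
            message => Console.WriteLine("SMTP relay: " + message), // stand-in for SMTP delivery
            new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        merge.LinkTo(deliver, new DataflowLinkOptions { PropagateCompletion = true });

        foreach (var contact in new List<string> { "Ann", "Bob", "Cal" })
            merge.Post(contact);

        merge.Complete();
        deliver.Completion.Wait();
    }
}
```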

May 25, 2015 [Week 12]:

  • Happy Memorial Day!
  • Created the database schema for “related tables”, in which inserts, updates, and deletes to rows in the RelatedTables table cause the schema of the targeted site-specific table to be modified accordingly.
  • Developed corresponding Data- and Business-layer functionality for working with related tables.
  • Modified the ContactLoader class (which retrieves contact data from SQL Server) to support related tables in addition to contact data, and to package up all the data in JSON format, optionally deserializing the JSON into Contact objects.
  • Built the core infrastructure of the Merge Engine, which allows embedded Javascript expressions such as {{contact.firstname}} and {{contact.purchases.amount}} to be evaluated server-side (a simplified token-replacement sketch follows this list).
  • Scrapped the core infrastructure of the Merge Engine and rebuilt it, to allow for the addition of conditional logic, looping, etc.
  • Implemented the following AngularJS-style custom HTML attributes:
    • data-if (for conditional insertion of an HTML element and its children)
    • data-switch, data-switch-when, and data-switch-default (acts similarly to the “switch” statement from Javascript and related languages)
    • data-repeat (loops through arrays of objects—such as related tables—creating a set of HTML elements for each element in the array)
    • data-href, data-src, data-srcset, and data-style (allows dummy values for the href, src, srcset, and style attributes to be used client-side, such as during content creation, but then be replaced by personalized values during server-side merging)
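
The real Merge Engine evaluates full Javascript expressions server-side; the simplified token-replacement sketch below only illustrates how {{path.to.value}} placeholders map onto contact data. The MergeSketch class and the flat dictionary are assumptions for illustration.

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Illustrative only: resolve {{path.to.value}} tokens against a flat dictionary.
static class MergeSketch
{
    public static string Merge(string template, IDictionary<string, string> data)
    {
        return Regex.Replace(template, @"\{\{([\w\.]+)\}\}",
            m => data.ContainsKey(m.Groups[1].Value) ? data[m.Groups[1].Value] : string.Empty);
    }
}

// Usage:
//   var html = MergeSketch.Merge("Hi {{contact.firstname}}!",
//       new Dictionary<string, string> { { "contact.firstname", "Peter" } });
//   // => "Hi Peter!"
```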

May 18, 2015 [Week 11]:

  • Added asynchronous validation of date/time values via web service calls.
  • Added support for culture-specific date/time formatting (m/d/yyyy, d/m/yyyy, etc.).
  • Asynchronous server-side validation of all data types in the filtering UI.
  • Additional client- and server-side date validation in filters, with support for multiple formats and optional time components.
  • Filters can now be saved as “Saved Filter Expressions” and then incorporated inside other filters.
  • Saved Filter Expressions can optionally be “expanded”, so that instead of referencing the saved expression, the contents of that saved expression will be copied inside the current filter, breaking the link with the saved expression.
  • CSS animations now highlight new expressions that have been added to a filter, which is necessary because new expressions are not necessarily added at the top of the filter.
  • Expression groups are now validated so that a group can’t be added inside another group of the same type; in other words, it isn’t possible to nest a “Meets all these criteria” group inside another “Meets all these criteria” group, because there is no purpose to the nested group in those circumstances.
  • Saved Filter Expressions are now validated to ensure that they aren’t self-referential, to avoid infinitely recursive loops (a sketch of the cycle check follows this list).
  • The Admin project now fully incorporates ASP.NET Identity, with some customization to add site-specific permissions to users.
  • Users of the system can now self-register (picking their own username & password, and supplying additional profile information) by providing a registration code supplied by Dirigo.
  • Two-factor authentication is now supported as an optional (user-selected) security option. Both SMS (via Twilio) and email are supported.
  • Marketing position statement started.
  • Branding effort started.
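
The self-reference check is essentially cycle detection over the graph of saved expressions. Here is a minimal sketch; the SavedFilter type and its References collection are stand-ins for the real entities.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for a saved filter expression and its references.
class SavedFilter
{
    public int Id { get; set; }
    public List<SavedFilter> References { get; } = new List<SavedFilter>();
}

static class FilterValidator
{
    // Returns true if the expression ever refers back to one already on the path.
    public static bool HasCircularReference(SavedFilter filter, HashSet<int> visited = null)
    {
        visited = visited ?? new HashSet<int>();
        if (!visited.Add(filter.Id))
            return true;
        return filter.References.Any(r => HasCircularReference(r, new HashSet<int>(visited)));
    }
}
```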

May 11, 2015 [Week 10]:

  • Refined segmentation UX
  • Modified database schema synchronization logic so that existing columns (i.e., profile fields) can be renamed without losing data.
  • Enhanced the server-side filtering classes to support all the different .NET and SQL data types represented by profile fields.
  • Implemented the SQL logic in all the filtering classes, including formatting, validation, and prevention of SQL injection attacks (a parameterization sketch follows this list).
  • Split the filtering UI into two separate AngularJS applications: an outer one containing page submission logic and an inner one containing the filter expression itself.
  • Created logic to allow the two applications to communicate with each other during the validation and submission processes, so that successful page submission relies on both client- and server-side validation.
  • Added support (both back-end and in the filtering UI) for A/B Split-type expressions, in addition to profile field expressions.
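
To make the injection-prevention point concrete: filter values always become SQL parameters rather than concatenated literals. The table and column names below are placeholders, not Konvey's schema.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Illustrative sketch of building a parameterized "State is any of ..." filter.
public static class ProfileFieldFilterSketch
{
    public static SqlCommand BuildStateFilter(SqlConnection connection, IList<string> states)
    {
        var command = connection.CreateCommand();
        var parameterNames = new List<string>();

        for (int i = 0; i < states.Count; i++)
        {
            string name = "@state" + i;
            parameterNames.Add(name);
            command.Parameters.AddWithValue(name, states[i]);
        }

        command.CommandText =
            "SELECT ContactId FROM Contacts WHERE State IN (" + string.Join(", ", parameterNames) + ")";
        return command;
    }
}
```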

May 4, 2015 [Week 9]:

  • Deploy working email segmentation engine
  • Created a set of Web API services to handle AJAX requests from the filtering UI.
  • Created the SiteField and FieldType entities, representing site-specific profile fields and their corresponding metadata, as well as web services to provide that data to the filtering UI in order to populate dropdowns and provide client-side data type validation.
  • Incorporated logic to serialize the server-side .NET ViewModel and pass it to the client-side AngularJS application, as a means to provide metadata.
  • Added additional serialization and deserialization logic to account for various differences between Javascript and C#.
  • Incorporated the AngularJS UI Bootstrap date picker component.
  • Refactored the existing Angular controllers, directives, and services to take advantage of TypeScript-specific features such as classes, interfaces, enums, and strong data typing.

April 27, 2015 [Week 8]:

  • Began architecture of email segmentation engine
  • Going to build it with AngularJS
  • Created scheduling logic that allows mailings to be sent on a recurring basis (e.g., the first Tuesday of each month at 9:30am). In this model, triggered mailings (those sent in response to some action) will be recurring mailings containing some sort of date-based filtering, rather than a separate category of mailings. A minimal CRON-based sketch follows this list.
  • Expanded the class library for filtering to include the basic functionality necessary to build filters based on profile field expressions (e.g., “State is any of the following ‘ME’,’NH’,’VT’”).
  • Created an AngularJS application that will serve as a modular UI for creating and editing filters.
  • Built a TypeScript class library for filtering that closely resembles the C# class library in the Dirigo.Mail.Core.Business.Filtering namespace, so that serialization and deserialization can happen seamlessly both server-side and client-side.
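
A minimal sketch of the recurring-schedule idea, using the NCrontab NuGet package to evaluate a CRON expression. Whether Konvey uses NCrontab or its own parser isn't stated here, so treat the library choice as an assumption.

```csharp
using System;
using NCrontab; // NCrontab NuGet package (illustrative library choice)

class RecurringScheduleSketch
{
    static void Main()
    {
        // Every Tuesday at 9:30am, expressed as a CRON schedule.
        var schedule = CrontabSchedule.Parse("30 9 * * 2");

        // The next occurrence becomes the schedule's NextBroadcast date.
        DateTime nextBroadcast = schedule.GetNextOccurrence(DateTime.Now);
        Console.WriteLine("Next broadcast: " + nextBroadcast);
    }
}
```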

April 20, 2015 [Week 7]:

  • Added to the schema of existing database tables (columns, indexes, constraints, relationships, triggers).
  • Significantly enhanced the Mailing Engine; created new multi-threading synchronization and implemented the timers that update the Broadcasts table every 10 seconds while a mailing is in progress.
  • Incorporated the DbContextScope component (available via NuGet) to handle DbContext instantiation and lifetime; removed Ninject Dependency Injection code related to DbContext instantiation.
  • Removed the business-layer Models that corresponded to the data-layer EF Entities, after deciding that an extra layer of abstraction wasn’t warranted.
  • Began to construct the class library for “filtering” (segmentation). This task will take weeks to complete.
  • Cleaned up various aspects of the codebase in preparation for a demo to the other Dirigo developers on Thursday afternoon.

April 13, 2015 Update [Week 6]:

  • Created new database tables, along with corresponding keys, indexes, check constraints, and relationships:
    • Broadcasts
    • Campaigns
    • MailingQueue
    • Mailings
    • Messages
    • SubscriptionCategories
    • SubscriptionCategorySettings
    • SubscriptionEvents
    • Subscriptions
    • Templates
  • Created five stored procedures to handle queueing and de-queueing of mailings to be executed.
  • Deleted the Dirigo.Mail.Internal website, which was previously the home of the “back-end” code for generating mailings, and created a Dirigo.Mail.Service application (deployable as a Windows service) to handle the back-end role.
  • Configured the new Dirigo.Mail.Service application to handle service events such as Start, Stop, Pause, Continue, and Shutdown, and to kick off timer events (e.g., Send Scheduled Mailings or Process Incoming Mail) on a configurable schedule.
  • Incorporated Microsoft OWIN code into Dirigo.Mail.Service so that the application is hosting its own website (independent of IIS), effectively making it a console application that also runs as a Windows service and also handles web service requests.
  • Created the guts of the Mailing Engine, which listens for requests to send mailings (via database queue), inserts mailing-related data into the appropriate database tables, executes the SQL to fetch contacts, then generates messages for each contact in a multi-threaded fashion. Much of the infrastructure of the code is now complete, but there is no personalization because I haven’t created the Merge Engine yet.
  • Configured the Mailing Engine so that mailings in progress can be cancelled either via web service call or by setting a non-null value for the CancellationDate column in the Broadcasts table.

April 6, 2015 Update [Week 5]:

  • Created a configuration framework that reads data from a JSON file into strongly-typed classes. Supports multiple versions of settings, to account for different settings in different environments (Development, QA, Production).
  • Configured logging to use log4Net in a similar fashion to the DirigoEdge CMS, but wrapped in Common.Logging, a widely-used open-source wrapper that exposes only the functionality common to all the leading logging frameworks, allowing an easy switch from log4Net to a different framework such as NLog.
  • Created an MSBuild installer script that performs all the steps currently necessary to get Dirigo Mail up-and-running on any machine:
    • Create Event Viewer event sources
    • Create registry entries identifying the environment (Dev vs. Production), prompting for information as necessary
    • Create all the necessary physical directories under one common root (some for Port 25 PowerMTA, others for shared asset files).
    • Create SQL Server aliases to abstract the machine name in different environments.
    • Create MSMQ queues, which will be used for a variety of purposes.
    • Create a COM+ application to host the Boogie Bounce API; this is necessary so that 32-bit Boogie Bounce is accessible from 64-bit applications, the alternative being to run Dirigo Mail in both 32-bit and 64-bit mode (more of a hassle, I decided).
    • Create IIS application pools with necessary settings.
    • Create IIS websites, applications, and virtual directories.
  • Created a Visual Studio solution containing the following projects:
    • Dirigo.Mail.Web (the website the general public will hit, for landing pages, join pages, link tracking, etc.; no authentication will be required or supported)
    • Dirigo.Mail.Admin (the website clients will use to manage their campaigns; forms authentication will be required, using ASP.NET Identity)
    • Dirigo.Mail.API (client-facing and internal RESTful web services, which probably won’t be active for the first release of the application)
    • Dirigo.Mail.Core (a class library containing business and data logic that will be referenced by all the other projects)
    • Dirigo.Mail.Service (a console application that can be deployed as a Windows service; this will run on each of the app servers, to generate mailings and perform any other CPU-intensive work)
    • Several unit-testing projects
  • Created a centralized caching framework that allows data to be cached in memory on each of the servers but have the cache automatically invalidated by SQL Server when the underlying data changes.
  • Implemented a dependency injection strategy using Ninject.  This allows for looser coupling throughout the application, and it also allows expensive Entity Framework database context objects to be constructed just once per HTTP request, instead of potentially dozens of times.
  • Developed a nearly-fully-functional incoming mail processor for handling bounces, feedback loop messages, postmaster messages, etc.  There’s lots of file-and-directory management logic, as well as database schemas and logic.  BoogieBounce is incorporated into the mail processor via COM+ (as mentioned previously).
  • Developed a framework for distributing requests to multiple database instances (for scalability purposes) based on ranges of ID values.  Connection strings—and, by extension, Entity Framework context objects—must be selected on-the-fly with each request, which gets fairly tricky but is absolutely necessary even though there is currently only one database instance.
  • Created four different databases:
    • DirigoMail_1 (contains data that’s site-specific rather than system-wide; we can create as many different database instances—e.g., DirigoMail_2—as we need in order to scale effectively)
    • DirigoMailShared (contains data that’s system-wide rather than site-specific)
    • DirigoUsers (contains user data for authentication and authorization purposes; we might be able to share this database with DirigoEdge, but Edge is currently using old authentication libraries and isn’t compatible with a migration)
    • DirigoSessionState (contains session data: we store session data at the SQL Server level so that users will be completely unaffected by failing over from one web server to another)
  • Created a set of SQL Server database triggers that perform various behind-the-scenes tasks, such as creating a new site-specific Contact table—with its own schema—each time a new Site is created, and also modifying that schema whenever data in the SiteFields table is modified (essentially, the contents of that table determine the schema of all the site-specific Contact tables).
  • Created a bunch of database tables related to Sites, Contacts, Bounces, etc., along with the necessary indexes, check constraints, and foreign key relationships.
  • Created code-first Entity Framework entities corresponding to those database tables.  Since I’m doing all the database schema work manually, they’re not really “code-first” in the normal sense, but they’re certainly not “database-first”.
  • Created models in the Business layer, corresponding to the EF entities in the Data layer. AutoMapper (a widely-used open-source object-to-object mapping tool) helps us create models from entities, and entities from models.
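
A minimal sketch of the entity-to-model mapping: the ContactEntity/ContactModel types are placeholders, and this uses AutoMapper's instance-based configuration API rather than whatever profile setup the project actually uses.

```csharp
using AutoMapper;

// Illustrative entity/model pair; the real classes live in the Data and Business layers.
public class ContactEntity { public int Id { get; set; } public string Email { get; set; } }
public class ContactModel  { public int Id { get; set; } public string Email { get; set; } }

public static class MappingSketch
{
    // In a real application the MapperConfiguration is built once at startup.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<ContactEntity, ContactModel>()).CreateMapper();

    public static ContactModel ToModel(ContactEntity entity)
    {
        return Mapper.Map<ContactModel>(entity);
    }
}
```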

March 30, 2015 Update [Week 4]

  • Not a single line of production code written thus far.
  • Thrashing around to be sure that we button-up deficiencies and plan appropriately.  We've been testing methods and little code projects since week 1.
  • Much deep thinking - this must be done correctly from the start.

Project Starts March 9, 2015

  • Several months of business planning took place leading up to the approval of this project.
  • The chosen technology is the Microsoft stack. That's where our competency is strongest, and since this is a commercial enterprise project we're not opposed to paying for development tools like Visual Studio 2015, ReSharper, Windows Server, and SQL Server (Web edition).

    Let me talk about licensing for a moment, for all those LAMP-stack folks who think that using ASP.NET is for idiots. With Nadella at the helm of Microsoft the old status quo is no longer an option; Microsoft has embraced Open Source. Within 18 months we'll be able to run our Visual Studio IDE on a Mac and our production servers on Linux. And we just might. The battle for this project was never about licensing costs - it was between the Java and .NET platforms. And C# is miles ahead of Java in terms of syntactic sugar.

    This will be an ASP.NET, C#, MVC, AngularJS project.

The Brand

  • The brand name Konvey was suggested by Jamie Ippolito in March 2015.
  • A first attempt to purchase the domain was made on 6/2/2015.
  • The Konvey logo was created on 1/4/2016 by Ryan Dolan.
  • The domain name Konvey.com was purchased from AfterNic on 1/11/2016.
  • First use of the brand on our blog was 1/11/2016.
  • Twitter and Facebook pages went online on 1/19/2016.
  • Konvey.com was put online on 1/22/2016.
  • Trademark protection was sought on 10/13/2016.

 
